Speed Isn’t Everything

The marketing arm of the broadband industry spends a lot of time convincing folks that the most important part of a broadband product is download speed. This makes sense if fiber or cable is competing in a market against slower technologies. But it seems like most advertising about speed is aimed at convincing existing customers to upgrade to faster speeds. While download speed matters for performance, the industry doesn’t spend much time talking about the other important attributes of broadband.

Upload Speed. Households that make multiple simultaneous upload connections like video calls, gaming, or connecting to a work or school server quickly come to understand the importance of upload speeds if they don’t have enough. This was the primary problem that millions of households subscribed to cable companies encountered during the pandemic when they suddenly were using a lot of upload. Many homes still struggle with this today, and too many people upgrade to faster download speeds, hoping to solve the problem. ISPs using technologies other than fiber rarely mention upload speed.

Oversubscription. Home broadband connections are served by technologies that share bandwidth across multiple customers. Your ISP is very unlikely to tell you the number of people sharing your node or the amount of bandwidth feeding your node. The FCC’s broadband labels require ISPs to disclose their network practices, but nobody tells you statistics like this that would help you compare the ISPs competing for your business. The cable industry ran afoul of this issue fifteen years ago when large numbers of homes began streaming video, and many ran into it again during the pandemic. It still happens today any time a neighborhood has more demand than the bandwidth being supplied.

Latency. The simple description of latency is the delay in getting the packets to your home for something sent over the Internet. Latency increases any time that packets have to be resent and pile up. If enough packets get backlogged, latency can make it difficult or impossible to maintain a real-time connection. Latency issues are behind a lot of the problems that people have with Zoom or Teams calls – yet most folks assume the problem is not having fast enough speed.

Prioritization. A new problem for some broadband customers is prioritization. Customers buying FWA cellular wireless are told upfront that their usage might be slowed if there is too much cellular demand at a tower. Cellular carriers clearly (and rightfully) give priority to cell phone users over home broadband. Starlink customers who buy mobile broadband are given the same warning. Starlink will prioritize normal customers in an area over campers and hikers. Most ISPs say they don’t prioritize, but as AI is introduced into networks it will be a lot easier for them to do so. Over the last few months I’ve seen that several big ISPs are considering selling a priority (and more expensive) connection to gamers at the expense of everybody else.

Your Home Network. Everybody wants to blame the ISP when they have problems. However, a large percentage of broadband problems come from WiFi inside the home. People keep outdated and obsolete WiFi routers that are undersized for their bandwidth. Customers try to reach an entire home from a single WiFi device. Even when customers use WiFi extenders and mesh networks to reach more of the home, they often deploy the devices poorly. If you are having any broadband problems, give yourself a present and buy a new WiFi router.

Reliability. If operated properly, fiber networks tend to be the most reliable. But there are exceptions, and reliability comes down as much to the quality of your local ISP as it does to the technology. It’s hard to say that any factor is more important than reliability if your ISP regularly has network outages when you want to use broadband.

New Technology to Lower Latency

There is a new network tool that’s starting to be eased into networks that can significantly lower latency. The new standard L4S (Low Latency, Low Loss, Scalable Throughput) was released in January 2023.

You might ask why we need better latency. Most of you have taken speed tests that measure latency with a ping test. That’s a measure of the round-trip time for a single small packet exchanged between your router and your ISP. The FCC says any latency below 100 ms (milliseconds) is acceptable, and unless you’re using high-orbit satellite broadband, you’ll likely never see a ping latency over 100 ms.

However, a ping test doesn’t tell you anything about the latency you experience while actually using broadband. A lot of the problems users have with broadband come from latency issues when the network is under load. This latency is often referred to as buffer bloat. Measuring latency under load means looking at the accumulated latency from all components of an Internet connection. Every component in the network – the switches and routers at both ends of a connection plus your modem – has a limit on the volume of data it can carry in any given second. If everything is working right, then latency under load is low since packets are being delivered to your computer as intended.
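
If you want to see this effect on your own connection, here is a rough sketch in Python. It uses a TCP connect time as a crude stand-in for a ping, and the download URL is a placeholder you would swap for a real large file – this is only an illustration of the idea, not a proper bufferbloat test:

    import socket
    import statistics
    import threading
    import time
    import urllib.request

    TEST_HOST = ("8.8.8.8", 443)                          # host used to sample round-trip time
    DOWNLOAD_URL = "https://example.com/large-file.bin"   # placeholder URL for a big download

    def sample_rtt_ms(samples=20, pause=0.25):
        """Time a TCP connect as a rough stand-in for a ping."""
        rtts = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection(TEST_HOST, timeout=2):
                pass
            rtts.append((time.perf_counter() - start) * 1000)
            time.sleep(pause)
        return rtts

    def saturate_download():
        """Pull a large file to put the connection under load."""
        with urllib.request.urlopen(DOWNLOAD_URL) as resp:
            while resp.read(1 << 16):
                pass

    idle = sample_rtt_ms()                                # latency with the line quiet
    threading.Thread(target=saturate_download, daemon=True).start()
    loaded = sample_rtt_ms()                              # latency while the download runs

    print(f"idle latency   ~{statistics.median(idle):.0f} ms")
    print(f"loaded latency ~{statistics.median(loaded):.0f} ms")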

Connections are rarely perfect, and that’s when troubles begin. Let’s say that your home router gets temporarily busy because the folks in your home are doing multiple tasks at the same time. If your home connection gets busy, packets can pile up, and many get dropped. This prompts the originating ISP to resend packets. Your home router has a buffer that is supposed to compensate for this by temporarily holding packets, but that often doesn’t work as planned, particularly for real-time transmissions like a Teams video conference. Every time packets have to be resent, more time is added to the latency of that connection, and the more resent packets that are coming in, the greater the chance of an even bigger backlog.

You may not have noticed, but the Ookla speed test also tells you about your latency under load. Immediately to the right of the ping latency is the average download and upload latency during the speed test. These two readings are a better indicator of your network performance.

If you really want to understand your latency, watch those numbers during the speed test. In writing this blog, I took speed tests on my computer and cellphone. My ping on Charter was 34 ms – a little slower than what I normally see. The average download latency was 147 ms, and during the test, I saw one reading over 400 ms. The average upload latency was 202 ms, with the highest reading I saw at 695 ms (seven tenths of a second). My AT&T cellphone latencies were higher (which is normal). The ping time was 46 ms. The average download latency was 585 ms, with the highest reading I saw at over 1 second. The average upload latency was 102 ms, with the highest reading over 300 ms. The high readings of latency under load explain why I often struggle with real-time activities.

How does L4S fix this problem? First, the various components of your network have to enable L4S. The most important components are the originating switch, the switches at your ISP, and your home router. When these network components have enabled L4S, the goal is to reduce the time that packets spend waiting in queue. L4S adds an indicator to packets to report the experience they had moving through the Internet. L4S doesn’t react if everything is working fine. But if there are delays, the originator of the transmission is asked to slow down the rate of sending packets (as are other enabled components in the network). This temporary slowdown stops packets from building up and can drastically reduce the percentage of dropped packets.
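
The mechanics of the standard are more involved than this, but the core feedback loop can be shown in a few lines. The toy Python sketch below is my own simplification, not code from the standard: it assumes the receiver reports what fraction of recent packets carried a congestion mark, and the sender trims its rate in proportion before a queue can build up:

    # Toy illustration of an L4S-style feedback loop, not the real protocol.
    # Real L4S uses ECN bits and a DCTCP / TCP Prague-style response; this only
    # shows the shape of the idea: back off a little, early, and in proportion.

    def adjust_send_rate(current_rate_mbps: float, marked_fraction: float) -> float:
        """Scale the sending rate based on the fraction of packets marked as congested."""
        if marked_fraction == 0.0:
            return current_rate_mbps                          # no congestion signal: leave the rate alone
        return current_rate_mbps * (1 - marked_fraction / 2)  # halve the rate at 100% marking

    rate = 100.0  # Mbps
    for marks in [0.0, 0.0, 0.1, 0.4, 0.05, 0.0]:             # hypothetical feedback per interval
        rate = adjust_send_rate(rate, marks)
        print(f"marked={marks:.0%}  new rate={rate:.1f} Mbps")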

Comcast has started to work L4S into its networks in some of its major markets. The company reports that the technology can cut latency under load at least in half, in some cases bringing it close to the ping latency.

The real key to making this work is to have the largest content providers build L4S into their networks. For example, a gaming app would need to make sure L4S is enabled at their serving data center to take advantage of the improved latency. If the Comcast trials are successful, it seems likely that a lot of the industry will adopt L4S, and savvy users will avoid applications that don’t use it.

There will be an interesting shift in the industry if the use of L4S becomes widespread. A lot of customers have upgraded broadband speeds to get better performance but found that they didn’t see a big improvement. In a lot of cases, the real culprit in bad performance is buffer bloat. If L4S gets introduced everywhere, customers might find they are satisfied with slower broadband speeds.

If you want to dig deeper into the new standard, you can find it here.

Is 5G Faster than 4G?

Ookla recently tackled this question in one of its research articles. Ookla compared the time it takes to load pages for Facebook, Google, and YouTube on cellphones using 4G LTE networks versus 5G networks.

Ookla thinks that page load speed is a great way to measure the cellphone experience. The time needed to load a web page is directly impacted by latency, which measures the lag between the time a phone requests a website and the time that website responds. You might think that when your phone asks to see a website, the Internet just facilitates the connection. In reality, the web process is not perfect, and not every bit from a web site makes it to your phone on the first try. In a normal web connection, the receiving ISP might need to make five to seven requests to resend missing bits before the connection between a web site and your phone is complete. Latency reflects the sum of all of those needed transactions.
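
You can get a feel for page load times yourself with a few lines of Python. The sketch below assumes the third-party requests package is installed, and it only times the raw HTML transfer – a real browser also fetches images and scripts and renders the page, so this understates the full experience:

    import time
    import requests

    def median_page_load(url: str, tries: int = 5) -> float:
        """Return the median number of seconds needed to fetch the page body."""
        timings = []
        for _ in range(tries):
            start = time.perf_counter()
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            _ = resp.content                      # force the full body to download
            timings.append(time.perf_counter() - start)
        timings.sort()
        return timings[len(timings) // 2]

    print(f"median load: {median_page_load('https://www.google.com'):.2f} seconds")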

Page load time is a critical statistic for eCommerce sites like Amazon. Ookla cites an article from Medium that quantifies the impact of slow page loads. According to the article:

  • 47% of users expect a page to load in 2 seconds or less.
  • 40% of users will abandon a website if it takes more than 3 seconds to load.
  • Surveys have shown that a delay of 1 second reduces customer satisfaction with a website by 16%.
  • 79% of shoppers who are not satisfied with a website’s performance are less likely to buy from the same site again.
  • This has a huge impact on eCommerce. According to the article, every 1 second delay in page load time costs Amazon $2.1 billion in sales per year.

Ookla also cited an older Ookla article which is a great primer on why latency matters.

So how did 5G compare to 4G LTE in the U.S.? According to Ookla, 5G improved page load times by 21% to 26% for the three popular web sites.

This will surprise some folks. I know several people who swear that 4G LTE is faster than 5G – and they might be right in their immediate neighborhood. Also note that measuring page load time is not the same as measuring speed on a speed test.

We should step back and look at the difference between 4G LTE and 5G. To a large degree, these are the same technology, and the difference is the frequency being used by the cellphone. The big carriers all established new bands of frequency they labeled as 5G, but initially operated these new bands with the same specifications used in the 4G LTE networks. Over time, the carriers have introduced a few 5G improvements in the 5G portion of the network – but the long list of whiz-bang improvements that was promised for 5G has never materialized. Your 5G phone is still not using network slicing and the other improvements that promised a big technology leap for 5G.

Also note that Ookla is reporting national statistics, and it’s likely that these statistics vary by market. It’s also likely that the performance of the two technologies differs during the day as the load on the two networks ebbs and flows. But the Ookla statistics show that, overall, there is better performance on 5G. Perhaps the article should have come with the traditional advertising warning: “Note that your performance might vary.”

FCC Considers New Definition of Broadband

On November 1, the FCC released a Notice of Inquiry that asks about various topics related to broadband deployment. One of the first questions asked is if the definition of broadband should be increased to 100/20 Mbps. I’ve written about this topic so many times over the years that writing this blog almost feels like déjà vu. Suffice it to say that the current FCC with a newly installed fifth Commissioner finally wants to increase the definition of broadband to 100/20 Mbps.

The NOI asks if that definition is sufficient for the way people use broadband today. Of most interest to me is the discussion of the proposed 20 Mbps definition of upload speed. Anybody who follows the industry knows that the use of 20 Mbps to define upload speeds is a political compromise that is not based upon anything other than extreme lobbying by the cable industry to not set the number higher. The NOI cites studies that say that 20 Mbps is not sufficient for households with multiple broadband users, yet the FCC still proposes to set the definition at 20 Mbps.

There are some other interesting questions being asked by the NOI. The FCC asks if it should rely on its new BDC broadband maps to assess the state of broadband – as if it has an option. The answer to anybody who digs deep into the mapping data is a resounding no, since there are still huge numbers of locations where speeds claimed in the FCC mapping are a lot higher than what is being delivered. The decision by the FCC to allow ISPs to report marketing speeds doomed the maps to be an ISP marketing tool rather than an accurate way to measure broadband deployment. It’s not hard to predict a time in a few years when huge numbers of people start complaining about being missed by the BEAD grants because of the inaccurate maps. But the FCC has little choice but to stick with the maps it has heavily invested in.

The NOI asks if the FCC should set a longer-term goal for future broadband speeds, like 1 Gbps/500 Mbps. This ignores the more relevant question about the next change in definition that should come after 100/20 Mbps. According to OpenVault, over 80% of U.S. homes already subscribe to download speeds of 200 Mbps or faster, and that suggests that 100 Mbps download is already behind the market. The NOI should be discussing when the definition ought to be increased to 200 or 300 Mbps download instead of a theoretical future definition change.

Setting a future theoretical speed goal is a feel-good exercise to make it sound like FCC policy will somehow influence the forward march of technology upgrades. This is exactly the sort of thing that talking-head policy folks do when they create 5-year and 10-year broadband plans. But I find it impossible to contemplate that the FCC will change the definition of broadband to gigabit speeds in the next decade, because doing so would be saying that every home that doesn’t have a gigabit option would not have broadband. Without that possibility, setting a high target goal is largely meaningless.

The NOI also asks if the FCC should somehow consider latency and packet loss – and the answer is that of course it should. But it shouldn’t completely punt on the issue like it does today, when FCC grants and subsidies only require a latency under 100 milliseconds and set no standards for packet loss. Setting latency requirements that everybody except high-orbit satellites can easily meet is like having no standard at all.

Of interest to rural folks is a long discussion in the NOI about raising the definition of cellular broadband from today’s paltry 5/1 Mbps. Mobile download speeds in most cities are greater than 150 Mbps today, often much faster. The NOI suggests that a definition of mobile broadband ought to be something like 35/3 Mbps – far slower than what urban folks can already receive. But talking about a definition of mobile broadband ignores that any definition is meaningless in the huge areas of the country where there is practically no mobile broadband coverage.

One of the questions I find most annoying asks if the FCC should measure broadband success by the number of ISPs available at a given location. This is the area where the FCC broadband maps are the most deficient. I wrote a recent blog that highlighted that seven or eight of the ten ISPs that claim coverage at my house aren’t real broadband options. Absolutely nobody is analyzing or challenging the maps for ISPs in cities that claim coverage that is either slower than claimed or doesn’t exist. But it’s good policy fodder for the FCC to claim that many folks in cities have a dozen broadband options. If it were only so.

Probably the most important question asked in the NOI is what the FCC should do about the millions of homes that can’t afford broadband. The FCC asks if it should adopt a universal service goal. This question has activated the lobbyists of the big ISPs who are shouting that the NOI is proof that the FCC wants to regulate and lower broadband rates. The big ISPs don’t even want the FCC to compile and publish data that compares broadband penetration rates to demographic data and household incomes. This NOI is probably not the right forum to ask that question – but solving the affordability gap affects far more households than the rural availability gap.

I think it’s a foregone conclusion that the FCC will use the NOI to adopt 100/20 Mbps as the definition of broadband. After all, the FCC is playing catchup to Congress, which essentially reset the definition of broadband to 100/20 Mbps two years ago in the BEAD grant legislation. The bigger question is if the FCC will do anything meaningful with the other questions asked in the NOI.

Getting Ready for the Metaverse

In a recent article in LightReading, Mike Dano quotes Dan Rampton of Meta as saying that the immersive metaverse experience is going to require a customer latency between 10 and 20 milliseconds.

The quote came from a presentation at the Wireless Infrastructure Association Connect (WIAC) trade show. Dano says the presentation there was aimed at big players like American Tower and DigitalBridge, which are investing heavily in major data centers. Meta believes we need a lot more data centers closer to users to speed up the Internet and reduce latency.

Let me put the 10 – 20 millisecond latency into context. Latency in this case would be the total delay of signal between a user and the data center that is controlling the metaverse experience. Meta is talking about the network that will be needed to support full telepresence where the people connecting virtually can feel like they are together in real time. That virtual connection might be somebody having a virtual chat with their grandmother or a dozen people gaming.

The latency experienced by anybody connected to the Internet is the accumulation of a number of small delays.

  • Transmission delay is the time required to get packets from a customer to be ready to route to the Internet. This is the latency that starts at the customer’s house and traverses the local ISP network. This delay is caused to some degree by the quality of the routers at the home – but the biggest factor in transmission delay is related to the technology being used. I polled several clients who tell me the latency inside their fiber network typically ranges between 4 and 8 milliseconds. Some wireless technologies also have low latency as long as there aren’t multiple hops between a customer and the core. Cable HFC systems are slower and can approach the 20 ms limit, and older technologies like DSL have much larger latencies. Satellite latencies, even the low-orbit networks, will not be fast enough to meet the 20 ms goal established by Meta due to the signal having to travel from the ground to a satellite and back to the Internet interface.
  • Processing delay is the time required by the originating ISPs to decide where a packet is to be sent. ISPs have to sort between all of the packets received from users and route each appropriately.
  • Propagation delay is due to the distance a signal travels outside of the local network. It takes a lot longer for a signal to travel from Tokyo to Baltimore than from Baltimore to Washington DC.
  • Queuing delays are the time required at the terminating end of the transmission. Since a metaverse connection is almost certainly going to be hosted at a data center, this is the time it takes to receive and appropriately route the signal to the right place in the data center.
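
As a back-of-the-envelope illustration of how these pieces add up against Meta’s 10 to 20 millisecond target, here is a small Python sketch. Every number in it is an assumption chosen to be plausible, not a measurement:

    # Rough delay budget built from the four components described above.
    SPEED_OF_LIGHT_IN_FIBER_KM_PER_MS = 200             # light covers roughly 200 km per ms in glass

    def propagation_delay_ms(distance_km: float) -> float:
        return distance_km / SPEED_OF_LIGHT_IN_FIBER_KM_PER_MS

    transmission_ms = 6                                  # assumed local fiber ISP network (the 4-8 ms range above)
    processing_ms = 1                                    # assumed routing decision at the ISP edge
    propagation_ms = propagation_delay_ms(60)            # assumed 60 km to a nearby data center
    queuing_ms = 2                                       # assumed handling inside the data center

    total_ms = transmission_ms + processing_ms + propagation_ms + queuing_ms
    print(f"estimated delay: ~{total_ms:.1f} ms "
          f"({'within' if total_ms <= 20 else 'over'} the 10-20 ms target)")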

It’s easy to talk about the metaverse as if it’s some far future technology. But companies are currently investing tens of billions of dollars to develop the technology. The metaverse will be the next technology that will force ISPs to improve networks. Netflix and streaming video had a huge impact on cable and telephone company ISPs, which were not prepared to have multiple customers streaming video at the same time. Working and schooling from home exposed the weakness of the upload links in cable company, fixed wireless, and DSL networks. The metaverse will push ISPs again.

Meta’s warning is that ISPs will need to have an efficient network if they want their customers to participate in the metaverse. Packets need to get out the door quickly. Networks that are overloaded at some times of the day will cause enough delay to make a metaverse connection unworkable. Too much jitter will mean resending missed packets, which adds significantly to the delay. Networks with low latency like fiber will be preferred. Large data centers that are closer to users can shave time off the latency. Customers are going to figure this out quickly and migrate to ISPs that can support a metaverse connection (or complain loudly about ISPs that can’t). It will be interesting to see whether ISPs heed the warnings coming from companies like Meta or wait until the world comes crashing down on their heads (which has been the historical approach to traffic management).

Jitter – A Measure of Broadband Quality

Most people have heard of latency, which is a measure of the average delay of data packets on a network. There is another important measure of network quality that is rarely talked about. Jitter is the variance in the delays of signals being delivered through a broadband network connection. Jitter occurs when the latency increases or decreases over time.

We have a tendency in the industry to oversimplify technical issues. We take a speed test and assume the answer that pops out is our speed. Those same speed tests also measure latency, and even network engineers sometimes get mentally lazy and are satisfied to see an expected latency number on a network test. But in reality, the broadband signal coming into your home is incredibly erratic. From millisecond to millisecond, the amount of data hitting your home network varies widely. Measuring jitter means measuring the degree of network chaos.
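
For the curious, here is a minimal Python sketch of what measuring jitter means in practice – take a series of latency samples and look at how much they swing, not just at their average. The sample values are hypothetical:

    import statistics

    # Hypothetical round-trip times in milliseconds from repeated pings.
    rtt_samples_ms = [22, 24, 21, 58, 23, 25, 97, 22, 24, 61]

    avg_latency = statistics.mean(rtt_samples_ms)
    jitter = statistics.stdev(rtt_samples_ms)            # how widely the samples are spread
    worst_swing = max(rtt_samples_ms) - min(rtt_samples_ms)

    print(f"average latency:  {avg_latency:.1f} ms")
    print(f"jitter (std dev): {jitter:.1f} ms")
    print(f"largest swing:    {worst_swing} ms")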

Jitter increases when networks get overwhelmed, even temporarily. Delays are caused in any network when the amount of data being delivered exceeds what can be accepted. There are a few common causes of increased jitter:

  • Not Enough Bandwidth. Low bandwidth connections experience increased jitter when incoming packets exceed the capacity of the broadband connection. This effect can cascade and multiply when the network is overwhelmed – being overly busy increases jitter, and the worse jitter then makes it even harder to receive incoming packets.
  • Hardware Limitations. Networks can bog down when outdated routers, switches, or modems can’t fully handle the volume of packets. Even issues like old or faulty cabling can cause delays and increase jitter.
  • Network Handoffs. Bottlenecks are the most vulnerable points in a network. The most common bottleneck in our homes is the device that converts landline broadband into WiFi. Even a slight hiccup at a bottleneck will negatively impact performance in the entire network.

All of these factors help to explain why old technology like DSL performs even worse than might be expected. Consider a home that has a 15 Mbps download connection on DSL. If an ISP were to instead deliver a 15 Mbps connection on fiber, the same customer would see a significant improvement. A fiber connection would avoid the jitter issues caused by antiquated DSL hardware. We tend to focus on speeds, but a 100 Mbps connection on a fiber network will typically have a lot less jitter than a 100 Mbps connection on a cable company network. Customers who try a fiber connection for the first time commonly say that the network ‘feels’ faster – what they are noticing is the reduced jitter.

Jitter can be deadliest for real-time connections – most people aren’t concerned about jitter if it means it takes a little longer to download a file. But increased jitter can play havoc with an important Zoom call or with maintaining a TV signal during a big sports event. It’s easiest to notice jitter when a real-time function hesitates or fails. Your home might have plenty of download bandwidth, and yet a connection can still fail because small problems caused by jitter accumulate until the connection gives out.

ISPs have techniques that can help to control jitter. One of the more interesting ones is to use a jitter buffer that grabs and holds data packets that arrive too quickly. It may not feel intuitive that slowing a network can improve quality. But recall that jitter is caused when there is a time delay between different packets in the same transmission. There is no way to make the slowest packets arrive any sooner – so slowing down the fastest ones increases the chance that Zoom call packets can be delivered evenly.
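
Here is a toy Python sketch of that idea – hold arriving packets briefly so they can be released on a steady clock even though they arrived unevenly. Real VoIP and video clients size this buffer adaptively; the fixed 40 ms delay and 20 ms packet cadence below are assumptions for illustration:

    import heapq

    PLAYOUT_DELAY_MS = 40                                # assumed fixed buffering delay

    def playout_schedule(arrivals):
        """arrivals: list of (sequence_number, arrival_time_ms).
        Returns (seq, arrived, played) with packets released on a steady 20 ms cadence."""
        buffer = []
        for seq, arrived in arrivals:
            heapq.heappush(buffer, (seq, arrived))       # re-order packets by sequence number
        base = arrivals[0][1] + PLAYOUT_DELAY_MS
        schedule = []
        while buffer:
            seq, arrived = heapq.heappop(buffer)
            play_at = base + seq * 20                    # steady 20 ms playout clock
            schedule.append((seq, arrived, max(play_at, arrived)))
        return schedule

    # Packets 0-4 arrive with uneven gaps (jitter) but play out evenly spaced.
    for seq, arrived, played in playout_schedule([(0, 0), (1, 35), (2, 38), (3, 90), (4, 95)]):
        print(f"packet {seq}: arrived {arrived:3d} ms, played {played:3d} ms")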

Fully understanding the causes of jitter in any specific network is a challenge because the causes can be subtle. It’s often hard to pinpoint a jitter problem because it can be here one millisecond and gone the next. But it’s something we should be discussing more. A lot of the complaints people have about their broadband connection are caused by too-high jitter.

Gaming and Broadband Demand

Broadband usage has spiked across the US this year as students and employees suddenly found themselves working from home and needing broadband to connect to school and work servers. But there is another quickly growing demand for broadband coming from gaming.

We’ve had online gaming of some sort over the last decade, but gaming has not been a data-intensive activity for ISPs. Until recently, the brains for gaming have been provided by special gaming computers or game boxes run locally by each gamer. These devices and the game software supplied the intensive video and sound experience, and the Internet was only used to exchange game commands between gamers. Command files are not large and contain the same information that is exchanged between a game controller and a gaming computer. In the past, gamers would exchange the command files across the Internet, and local software would interpret and act on the commands being exchanged.

But the nature of online gaming is changing rapidly. Already, before the pandemic, game platforms had been migrating online. Game companies are now running the core software for games in a data center and not on local PCs or game consoles. The bandwidth path required between the data center core and a gamer is much larger than the command files that used to be exchanged since the data path now carries the full video and music signals as well as 2-way communications between gamers.

There is a big benefit of online gaming for gamers, assuming they have enough bandwidth to participate. Putting the gaming brains in a data center reduces latency, meaning that game commands can be activated more quickly. Latency is signal delay, and the majority of the delay in any Internet transmission happens inside the wires and electronics of the local ISP network. With online gaming, a signal only has to cross a single gamer’s local ISP network. Before online gaming, that signal had to pass through the local ISP networks of both gamers.

There are advantages for gaming companies to move online. They can release a new title instantly to the whole country. Game companies don’t have to manufacture and distribute copies of games. Games can now be sold to gamers who can’t afford the expensive game boxes or computers. Gamers benefit because gaming can now be played on any device and a gamer isn’t forced into buying an expensive gaming computer and then only playing in that one location. Game companies can now sell a gaming experience that can be played from anywhere, not just sitting at a gamer’s computer.

A gaming stream is far more demanding on the network than a video stream from Netflix. Netflix feeds out the video signal in advance of what a viewer is watching, and the local TV or PC stores video content for the next few minutes of viewing. This was a brilliant move by video streamers because streaming ahead of what viewers are watching largely eliminated the delays and pixelation of video streams that were common when Netflix was new. By streaming in advance of what a viewer is watching, Netflix has time to resend any missed packets so that the video viewing experience has ideal quality by the time a viewer catches up to the stream.

Gaming doesn’t have this same luxury because gaming is played in real time. The gamers at both ends of a game need to experience the game at the same time. This greatly changes the demand on the broadband network. Online gaming means a simultaneous stream being sent from a data center to both gamers, and it’s vital that both gamers receive the signal at the same time. Gaming requires a higher quality of download path than Netflix because there isn’t time to resend missed data packets. A gamer needs a quality downstream path to receive a quality video transmission in real-time.

Gaming adds a second big demand in that latency becomes critical. A player that receives the signal just a little faster than an opponent has an advantage. A friend of mine has symmetrical gigabit Verizon FiOS fiber broadband at his home, which is capable of delivering the best possible gaming data stream. Yet his son is driving his mother crazy by running category 6 cables between the gaming display and the FiOS modem. He swears that bypassing the home WiFi lowers the latency and gives him an edge over other gamers. From a gamer’s perspective, network latency is becoming possibly more important than download speed. A gamer on fiber has an automatic advantage over a gamer on a cable company network.

At the same time as the gaming experience has gotten more demanding for network operators, the volume of gaming has exploded during the pandemic as people stuck at home have turned to gaming. All of the major game companies are reporting record earnings. The NPD Group, which tracks the gaming industry, reports that spending on gaming was up 30% in the second quarter of this year compared to 2019.

ISPs are already well aware of gamers, who are the harshest critics of broadband network performance. Gamers understand that little network glitches, hiccups, and burps that other users may not even notice can cost them a game, so gamers closely monitor network performance. Most ISPs know their gamers, who are the first to complain loudly about network problems.

FCC Further Defines Speed Tests

The FCC recently voted to tweak the rules for speed testing for ISPs who accept federal funding from the Universal Service Fund or from other federal funding sources. This would include all rate-of-return carriers including those taking ACAM funding, carriers that won the CAF II reverse auctions, recipients of the Rural Broadband Experiment (RBE) grants, Alaska Plan carriers, and likely carriers that took funding in the New York version of the CAF II award process. These new testing rules will also apply to carriers accepting the upcoming RDOF grants.

The FCC had originally released testing rules in July 2018 in Docket DA 18-710. Those rules applied to the carriers listed above as well as to all price cap carriers and recipients of the CAF II program. The big telcos will start testing in January of 2020 and the FCC should soon release a testing schedule for everybody else – the dates for testing were delayed until this revised order was issued.

The FCC made the following changes to the testing program:

  • Modifies the schedule for commencing testing by basing it on the deployment obligations specific to each Connect America Fund support mechanism;
  • Implements a new pre-testing period that will allow carriers to become familiar with testing procedures without facing a loss of support for failure to meet the requirements;
  • Allows greater flexibility to carriers for identifying which customer locations should be tested and selecting the endpoints for testing broadband connections. This last requirement sounds to me like the FCC is letting the CAF II recipients off the hook by allowing them to only test customers they know meet the 10/1 Mbps speeds.

The final order should be released soon and will hopefully answer carrier questions. One of the areas of concern is that the FCC seems to want to test the maximum speeds that a carrier is obligated to deliver. That might mean having to give customers the fastest connection during the time of the tests even if they have subscribed to slower speeds.

Here are some of the key provisions of the testing program that were not changed by the recent order:

  • ISPs can choose between three methods for testing. First, they may elect what the FCC calls the MBA program, which uses an external vendor, approved by the FCC, to perform the testing. This firm has been testing speeds for the network built by large telcos for many years. ISPs can also use existing network tools if they are built into the customer CPE that allows test pinging and other testing methodologies. Finally, an ISP can install ‘white boxes’ that provide the ability to perform the tests.
  • Testing, at least for now, is perpetual, and carriers need to recognize that this is a new cost they have to bear due to taking federal funding.
  • The number of tests to be conducted will vary by the number of customers for which a recipient is getting support: with 50 or fewer households, the test covers 5 customers; for 51-500 households, the test covers 10% of households; for 500 or more households, the test covers 50 households. ISPs declaring a high latency must test more locations, with the maximum being 370.
  • Tests for a given customer run for one solid week, including weekends, in each quarter. Tests must be conducted in the evenings between 6:00 PM and 12:00 AM. Latency tests must be done every minute during the six-hour testing window. Speed tests – run separately for upload speeds and download speeds – must be done once per hour during the six-hour testing window.
  • ISPs are expected to meet latency standards 95% of the time. Speed tests must achieve 80% of the expected upload and download speed 80% of the time. An example of this requirement is that a carrier guaranteeing a gigabit of speed must achieve 800 Mbps 80% of the time (see the sketch after this list). ISPs that meet the speeds and latencies for 100% of customers are excused from quarterly testing and only have to test once per year.
  • There are financial penalties for ISPs that don’t meet these tests.
  • ISPs that have between 85% and 100% of households that meet the test standards lose 5% of their FCC support.
  • ISPs that have between 70% and 85% of households that meet the test standards lose 10% of their FCC support.
  • ISPs that have between 55% and 70% of households that meet the test standards lose 15% of their FCC support.
  • ISPs with less than 55% of compliant households lose 25% of their support.
  • The penalties only apply to funds that haven’t yet been collected by an ISP.
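
Here is a small Python sketch of the 80/80 speed rule referenced in the list above. The thresholds come from the testing requirements; the sample readings are made up:

    def passes_80_80(samples_mbps, advertised_mbps):
        """A location passes if at least 80% of its speed samples reach 80% of the advertised speed."""
        threshold = 0.8 * advertised_mbps
        good = sum(1 for s in samples_mbps if s >= threshold)
        return good / len(samples_mbps) >= 0.8

    # A gigabit subscriber: samples must hit 800 Mbps at least 80% of the time.
    hourly_samples = [930, 910, 780, 850, 940, 720]      # hypothetical Mbps readings
    print(passes_80_80(hourly_samples, advertised_mbps=1000))   # False - only 4 of 6 readings qualify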

Should Satellite Broadband be Subsidized?

I don’t get surprised very often in this industry, but I must admit that I was surprised by the amount of money awarded for satellite broadband in the reverse auction for CAF II earlier this year. Viasat, Inc., which markets as Exede, was the fourth largest winner, collecting $122.5 million in the auction.

I understand how Viasat won – it’s largely a function of the way that reverse auctions work. In a reverse auction, each bidder lowers the amount of their bid in successive rounds until only one bidder is left in any competitive situation. The whole pool of bids is then adjusted to meet the available funds, which could mean an additional reduction of what winning bidders finally receive.

Satellite providers, by definition, have a huge unfair advantage over every other broadband technology. Viasat was already in the process of launching new satellites – and they would have launched them with or without the FCC grant money. Because of that, there is no grant level too low for them to accept out of the grant process – they would gladly accept getting only 1% of what they initially requested. A satellite company can simply outlast any other bidder in the auction.

This is particularly galling since Viasat delivers what the market has already deemed to be inferior broadband. The download speeds are fast enough to satisfy the reverse auction at speeds of at least 12 Mbps. The other current satellite provider, HughesNet, offers speeds of at least 25 Mbps. The two issues that customers have with satellite broadband are the latency and the data caps.

By definition, the latency for a satellite in a geostationary orbit of roughly 22,000 miles is at least 476 ms (milliseconds) just to account for the distance traveled to and from the earth. Actual latency is often above 600 ms. The rule of thumb is that real-time applications like VoIP, gaming, or holding a connection to a corporate LAN start having problems when latency is greater than 100-150 ms.
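
The arithmetic behind that floor is easy to check. A request travels from the ground to the satellite and back down, and the reply makes the same trip, so the signal covers roughly four times the orbital altitude at the speed of light. A quick Python sketch using the approximate geostationary altitude:

    SPEED_OF_LIGHT_MILES_PER_SEC = 186_282
    ORBIT_MILES = 22_236                                 # approximate geostationary altitude

    one_way_ms = (2 * ORBIT_MILES) / SPEED_OF_LIGHT_MILES_PER_SEC * 1000   # up to the satellite and back down
    round_trip_ms = 2 * one_way_ms                       # the reply makes the same trip
    print(f"minimum round-trip latency: ~{round_trip_ms:.0f} ms")          # ~477 ms, before any routing delays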

Exede no longer cuts customers dead for the month once they reach the data cap, but instead reduces speeds when the network is busy for any customer over the cap. Customer reviews say this can be extremely slow during prime times. The monthly data caps are small, with plans ranging from $49.99 per month for a 10 GB data cap to $99.95 per month for a 150 GB data cap. To put those caps into perspective, OpenVault recently reported that the average landline broadband household used 273.5 GB of data per month in the first quarter of 2019.

Viasat has to be thrilled with the result of the reverse auction. They got $122.5 million for something they were already doing. The grant money isn’t bringing any new option to customers who were already free to buy these products before the auction. There is no better way to say it other than Viasat got free money due to a loophole in the grant process. I don’t think they should have been allowed into the auction since they aren’t bringing any broadband that is not already available.

The bigger future issue is if the new low-earth orbit satellite companies will qualify for the future FCC grants, such as the $20.4 billion grant program starting in 2021. The new grant programs are also likely to be reverse auctions. There is no doubt that Jeff Bezos or Elon Musk will gladly take government grant money, and there is no doubt that they can underbid any landline ISP in a reverse auction.

For now, we don’t know anything about the speeds that will be offered by the new satellites. We know that they claim that latency will be about the same as cable TV networks at about 25 ms. We don’t know about data plans and data caps, although Elon Musk has hinted at having unlimited data plans – we’ll have to wait to see what is actually offered.

It would be a tragedy for rural broadband if the new (and old) satellite companies were to win any substantial amount of the new grant money. To be fair, the new low-orbit satellite networks are expensive to launch, with price tags for each of the three providers estimated to be in the range of $10 billion. But these companies are using these satellites worldwide and will be launching them with or without help from an FCC subsidy. Rural customers are going to best be served in the long run by having somebody build a network in their neighborhood. It’s the icing on the cake if they are also able to buy satellite broadband.

Why Offer Fast Data Speeds?

A commenter on an earlier blog asked a great question. They observed that most ISPs say that customer usage doesn’t climb when customers are upgraded to speeds faster than 50 Mbps – so why does the industry push for faster speeds? The question was prompted by the observation that the big cable companies have unilaterally increased speeds in most markets to between 100 Mbps and 200 Mbps. There are a lot of different answers to that question.

First, I agree with that observation and I’ve heard the same thing. The majority of households today are happy with a speed of 50 Mbps, and when a customer that already has enough bandwidth is upgraded they don’t immediately increase their downloading habits.

I’ve lately been thinking that 50 Mbps ought to become the new FCC definition of broadband, for exactly the reasons included in the question. This seems to be the speed today where most households can use the Internet in the way they want. I would bet that many households that are happy at 50 Mbps would no longer be happy with 25 Mbps broadband. It’s important to remember that just three or four years ago the same thing could have been said about 25 Mbps, and three or four years before that the same was true of 10 Mbps. One reason to offer faster speeds is to stay ahead of that growth curve. Household bandwidth and speed demand has been doubling every three years or so since 1980. While 50 Mbps is a comfortable level of home bandwidth for many today, in just a few years it won’t be.
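
To see what that doubling curve implies, here is a tiny Python projection. The 50 Mbps starting point and the three-year doubling period come straight from the observations above; the output is a rule-of-thumb extrapolation, not a forecast:

    comfortable_mbps = 50                                # the speed most households are happy with today
    for years_out in (3, 6, 9, 12):
        projected = comfortable_mbps * 2 ** (years_out / 3)   # demand doubles roughly every three years
        print(f"in {years_out:2d} years: ~{projected:.0f} Mbps to feel equally comfortable")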

It’s also worth noting that there are some households who need more than the 50 Mbps speeds because of the way they use the Internet. Households with multiple family members that all want to stream at the same time are the first to bump against the limitations of a data product. If ISPs never increase speeds above 50 Mbps, then every year more customers will bump against that ceiling and begin feeling frustrated with that speed. We have good evidence this is true by seeing customers leave AT&T U-verse, at 50 Mbps, for faster cable modem broadband.

Another reason that cable companies have unilaterally increased speeds is to help overcome customer WiFi issues. Customers often don’t care about the speed in the room with the WiFi modem, but care about what they can receive in the living room or a bedroom that is several rooms away from the modem. Faster download speeds can provide the boost needed to get a stronger WiFi signal through internal walls. The big cable companies know that increasing speeds cuts down on customer calls complaining about speed issues. I’m pretty sure that the cable companies will say that increasing speeds saves them money due to fewer customer complaints.

Another important factor is customer perception. I always tell people that if they have the opportunity, they should try a computer connected to gigabit speeds. A gigabit product ‘feels’ faster, particularly if the gigabit connection is on fiber with low latency. Many of us are old enough to remember that day when we got our first 1 Mbps DSL or cable modem and got off dial-up. The increase in speed felt liberating, which makes sense because a 1 Mbps DSL line is twenty times faster than dial-up, and also has a lower latency. A gigabit connection is twenty times faster than a 50 Mbps connection and seeing it for the first time has that same wow factor – things appear on the screen almost instantaneously as you hit enter. The human eye is really discerning, and it can see a big difference between loading the same web site at 25 Mbps and at 1 Gbps. The actual time difference isn’t very much, but the eye tells the brain that it is.  I think the cable companies have figured this out – why not give faster speeds if it doesn’t cost anything and makes customers happy?
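
The back-of-the-envelope numbers behind "the actual time difference isn’t very much" are easy to run. This Python sketch assumes a 3 MB web page and ignores latency and rendering time entirely:

    PAGE_SIZE_MB = 3                                     # assumed size of a typical web page
    for speed_mbps in (25, 1000):
        seconds = (PAGE_SIZE_MB * 8) / speed_mbps        # megabytes to megabits, then divide by speed
        print(f"{speed_mbps:>4} Mbps: {seconds * 1000:.0f} ms just to move the bytes")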

While customers might not immediately use more broadband, I think increasing the speed invites them to do so over time. I’ve talked to a lot of people who have lived with inadequate broadband connections and they become adept at limiting their usage, just like we’ve all done for many years with cellular data usage. Rural families all know exactly what they can and can’t do on their broadband connection. For example, if they can’t stream video and do schoolwork at the same time, they change their behavior to fit what’s available to them. Even non-rural homes learn to do this to a degree. If trying to stream multiple video streams causes problems, customers quickly learn not to do it.

Households with fast and reliable broadband don’t give a second thought about adding an additional broadband application. It’s not a problem to add a new broadband device or to install a video camera at the front door. It’s a bit of a chicken-and-egg question – do fast broadband speeds promote greater broadband usage, or does the desire to use more applications drive the desire to get faster speeds? It’s hard to know anymore since so many homes have broadband speeds from cable companies or fiber providers that are set faster than what they need today.