Categories
Regulation - What is it Good For?

FCC Considers New Definition of Broadband

On November 1, the FCC released a Notice of Inquiry that asks about various topics related to broadband deployment. One of the first questions asked is if the definition of broadband should be increased to 100/20 Mbps. I’ve written about this topic so many times over the years that writing this blog almost feels like déjà vu. Suffice it to say that the current FCC with a newly installed fifth Commissioner finally wants to increase the definition of broadband to 100/20 Mbps.

The NOI asks if that definition is sufficient for the way people use broadband today. Of most interest to me is the discussion of the proposed 20 Mbps definition of upload speed. Anybody who follows the industry knows that the use of 20 Mbps to define upload speeds is a political compromise that is not based upon anything other than extreme lobbying by the cable industry to not set the number higher. The NOI cites studies that say that 20 Mbps is not sufficient for households with multiple broadband users, yet the FCC still proposes to set the definition at 20 Mbps.

There are some other interesting questions being asked by the NOI. The FCC asks if it should rely on its new BDC broadband maps to assess the state of broadband – as if it has an option. The answer to anybody who digs deep into the mapping data is a resounding no, since there are still huge numbers of locations where speeds claimed in the FCC mapping are a lot higher than what is being delivered. The decision by the FCC to allow ISPs to report marketing speeds doomed the maps to be an ISP marketing tool rather than any accurate way to measure broadband deployment. It’s not hard to predict a time in a few years when huge numbers of people start complaining about being missed by the BEAD grants because of the inaccurate maps. But the FCC has little choice but to stick with the maps it has heavily invested in.

The NOI asks if the FCC should set a longer-term goal for future broadband speeds, like 1 Gbps/500 Mbps. This ignores the more relevant question about the next change in definition that should come after 100/20 Mbps. According to OpenVault, over 80% of U.S. homes already subscribe to download speeds of 200 Mbps or faster, and that suggests that 100 Mbps download is already behind the market. The NOI should be discussing when the definition ought to be increased to 200 or 300 Mbps download instead of a theoretical future definition change.

Setting a future theoretical speed goal is a feel-good exercise to make it sound like FCC policy will somehow influence the forward march of technology upgrades. This is exactly the sort of thing that talking-head policy folks do when they create 5-year and 10-year broadband plans. But I find it impossible to contemplate that the FCC will change the definition of broadband to gigabit speeds in the next decade, because doing so would be saying that every home that doesn’t have a gigabit option would not have broadband. Without that possibility, setting a high target goal is largely meaningless.

The NOI also asks if the FCC should somehow consider latency and packet loss – and the answer is that of course it should. But the FCC can’t keep punting on the issue the way it does today, when FCC grants and subsidies only require a latency under 100 milliseconds and set no standard for packet loss. Setting latency requirements that everybody except high-orbit satellites can easily meet is like having no standard at all.

Of interest to rural folks is a long discussion in the NOI about raising the definition of cellular broadband from today’s paltry 5/1 Mbps. Mobile download speeds in most cities are greater than 150 Mbps today, and often much faster. The NOI suggests that a definition of mobile broadband ought to be something like 35/3 Mbps – something that is far slower than what urban folks can already receive. But talking about a definition of mobile broadband ignores that any definition of mobile broadband is meaningless in the huge areas of the country where there is practically no mobile broadband coverage.

One of the questions I find most annoying asks if the FCC should measure broadband success by the number of ISPs available at a given location. This is the area where the FCC broadband maps are the most deficient. I wrote a recent blog that highlighted that seven or eight of the ten ISPs that claim coverage at my house aren’t real broadband options. Absolutely nobody is analyzing or challenging the maps for ISPs in cities that claim coverage that is either slower than claimed or doesn’t exist. But it’s good policy fodder for the FCC to claim that many folks in cities have a dozen broadband options. If only it were so.

Probably the most important question asked in the NOI is what the FCC should do about the millions of homes that can’t afford broadband. The FCC asks if it should adopt a universal service goal. This question has activated the lobbyists of the big ISPs who are shouting that the NOI is proof that the FCC wants to regulate and lower broadband rates. The big ISPs don’t even want the FCC to compile and publish data that compares broadband penetration rates to demographic data and household incomes. This NOI is probably not the right forum to ask that question – but solving the affordability gap affects far more households than the rural availability gap.

I think it’s a foregone conclusion that the FCC will use the NOI to adopt 100/20 Mbps as the definition of broadband. After all, the FCC is playing catchup to Congress, which essentially reset the definition of broadband to 100/20 Mbps two years ago in the BEAD grant legislation. The bigger question is if the FCC will do anything meaningful with the other questions asked in the NOI.

Categories
Technology The Industry

Getting Ready for the Metaverse

In a recent article in LightReading, Mike Dano quotes Dan Rampton of Meta as saying that the immersive metaverse experience is going to require a customer latency between 10 and 20 milliseconds.

The quote came from a presentation at the Wireless Infrastructure Association Connect (WIAC) trade show. Dano says the presentation there was aimed at big players like American Tower and DigitalBridge, which are investing heavily in major data centers. Meta believes we need a lot more data centers closer to users to speed up the Internet and reduce latency.

Let me put the 10 – 20 millisecond latency into context. Latency in this case would be the total delay of signal between a user and the data center that is controlling the metaverse experience. Meta is talking about the network that will be needed to support full telepresence where the people connecting virtually can feel like they are together in real time. That virtual connection might be somebody having a virtual chat with their grandmother or a dozen people gaming.

The latency experienced by anybody connected to the Internet is the accumulation of a number of small delays.

  • Transmission delay is the time required to get packets from a customer ready to be routed to the Internet. This is the latency that starts at the customer’s house and traverses the local ISP network. This delay is caused to some degree by the quality of the routers at the home – but the biggest factor in transmission delay is related to the technology being used. I polled several clients who tell me the latency inside their fiber network typically ranges between 4 and 8 milliseconds. Some wireless technologies also have low latency as long as there aren’t multiple hops between a customer and the core. Cable HFC systems are slower and can approach the 20 ms limit, and older technologies like DSL have much larger latencies. Satellite latencies, even the low-orbit networks, will not be fast enough to meet the 20 ms goal established by Meta due to the signal having to travel from the ground to a satellite and back to the Internet interface.
  • Processing delay is the time required by the originating ISPs to decide where a packet is to be sent. ISPs have to sort between all of the packets received from users and route each appropriately.
  • Propagation delay is due to the distance a signal travels outside of the local network. It takes a lot longer for a signal to travel from Tokyo to Baltimore than it takes to travel between Baltimore and Washington DC.
  • Queuing delays are the time required at the terminating end of the transmission. Since a metaverse connection is almost certainly going to be hosted at a data center, this is the time it takes to receive and appropriately route the signal to the right place in the data center.
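
To make the arithmetic concrete, here is a minimal sketch that adds up these delay components and checks the total against the 10 – 20 millisecond target Meta describes. All of the numbers are illustrative assumptions, not measurements of any real network.

```python
# Hypothetical one-way latency budget for a metaverse session.
# Every value below is an assumption chosen for illustration.
DELAYS_MS = {
    "transmission (home + local ISP, fiber)": 6.0,   # ~4-8 ms cited above for fiber
    "processing (ISP routing)": 1.0,
    "propagation (to a regional data center)": 4.0,  # a few hundred miles of fiber
    "queuing (inside the data center)": 2.0,
}

META_TARGET_MS = 20.0  # upper end of the 10-20 ms goal

total = sum(DELAYS_MS.values())
for name, ms in DELAYS_MS.items():
    print(f"{name:42s} {ms:5.1f} ms")
print(f"{'total latency':42s} {total:5.1f} ms")
print("meets the target" if total <= META_TARGET_MS else "misses the target")
```

Swap in a cable, DSL, or satellite number for the transmission line and the budget is blown immediately, which is the point Meta is making.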

It’s easy to talk about the metaverse as if it’s some far future technology. But companies are currently investing tens of billions of dollars to develop the technology. The metaverse will be the next technology that will force ISPs to improve networks. Netflix and streaming video had a huge impact on cable and telephone company ISPs, which were not prepared to have multiple customers streaming video at the same time. Working and schooling from home exposed the weakness of the upload links in cable company, fixed wireless, and DSL networks. The metaverse will push ISPs again.

Meta’s warning is that ISPs will need to have an efficient network if they want their customers to participate in the metaverse. Packets need to get out the door quickly. Networks that are overloaded at some times of the day will cause enough delay to make a metaverse connection unworkable. Too much jitter will mean resending missed packets, which adds significantly to the delay. Networks with low latency like fiber will be preferred. Large data centers that are closer to users can shave time off the latency. Customers are going to figure this out quickly and migrate to ISPs that can support a metaverse connection (or complain loudly about ISPs that can’t). It will be curious to see if ISPs will heed the warnings coming from companies like Meta or if they will wait until the world comes crashing down on their heads (which has been the historical approach to traffic management).

Categories
Technology

Jitter – A Measure of Broadband Quality

Most people have heard of latency, which is a measure of the average delay of data packets on a network. There is another important measure of network quality that is rarely talked about. Jitter is the variance in the delays of signals being delivered through a broadband network connection. Jitter occurs when the latency increases or decreases over time.

We have a tendency in the industry to oversimplify technical issues. We take a speed test and assume the answer that pops out is our speed. Those same speed tests also measure latency, and even network engineers sometimes get mentally lazy and are satisfied to see an expected latency number on a network test. But in reality, the broadband signal coming into your home is incredibly erratic. From millisecond to millisecond, the amount of data hitting your home network varies widely. Measuring jitter means measuring the degree of network chaos.
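
As a simple illustration of what measuring that chaos means, the sketch below takes a handful of hypothetical ping results and reports the average latency along with the jitter, computed here as the average change between consecutive samples (one common way to express it).

```python
# Hypothetical latency samples in milliseconds, e.g. from repeated pings.
samples_ms = [12.1, 11.8, 25.4, 12.3, 40.2, 12.0, 13.1, 29.7]

avg_latency = sum(samples_ms) / len(samples_ms)

# Jitter here = average absolute difference between consecutive latency samples.
diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"average latency: {avg_latency:.1f} ms")
print(f"jitter:          {jitter:.1f} ms")
```

Two connections can report the same average latency while one has several times the jitter – and it’s the jittery one that will feel erratic.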

Jitter increases when networks get overwhelmed, even temporarily. Delays are caused in any network when the amount of data being delivered exceeds what can be accepted. There are a few common causes of increased jitter:

·         Not Enough Bandwidth. Low bandwidth connections experience increased jitter when incoming packets exceed the capacity of the broadband connection. This effect can cascade and multiply when the network is overwhelmed – being overly busy increases jitter, and the worse jitter then makes it even harder to receive incoming packets.

·         Hardware Limitations. Networks can bog down when outdated routers, switches, or modems can’t fully handle the volume of packets. Even issues like old or faulty cabling can cause delays and increase jitter.

·         Network Handoffs. Network bottlenecks are the most vulnerable points in a network. The most common bottleneck in all of our homes is the device that converts landline broadband into WiFi. Even a slight hiccup at a bottleneck will negatively impact performance in the entire network.

All of these factors help to explain why old technology like DSL performs even worse than might be expected. Consider a home that has a 15 Mbps download connection on DSL. If an ISP were to instead deliver a 15 Mbps connection on fiber, the same customer would see a significant improvement. A fiber connection would avoid the jitter issues caused by antiquated DSL hardware. We tend to focus on speeds, but a 100 Mbps connection on a fiber network will typically have a lot less jitter than a 100 Mbps connection on a cable company network. Customers who try a fiber connection for the first time commonly say that the network ‘feels’ faster – what they are noticing is the reduced jitter.

Jitter can be deadliest to real-time connections – most people aren’t concerned about jitter if it means it takes a little longer to download a file. But increased jitter can play havoc with an important Zoom call or with maintaining a TV signal during a big sports event. It’s easiest to notice jitter when a real-time function hesitates or fails. Your home might have plenty of download bandwidth, and yet connections still fail because the small problems caused by jitter can accumulate.

ISPs have techniques that can help to control jitter. One of the more interesting ones is to use a jitter buffer that grabs and holds data packets that arrive too quickly. It may not feel intuitive that slowing a network can improve quality. But recall that jitter is caused when there is a time delay between different packets in the same transmission. There is no way to make the slowest packets arrive any sooner – so slowing down the fastest ones increases the chance that Zoom call packets can be delivered evenly.
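
Here is a bare-bones sketch of the jitter buffer idea: packets that arrive early are held and released on a fixed schedule so the application sees an even stream, while a packet that arrives too late simply misses its slot. The buffer depth, playout interval, and arrival times are all assumptions for illustration.

```python
# Minimal jitter buffer simulation. Packets arrive unevenly (jitter), but are
# played out on a fixed 20 ms schedule after a small buffering delay.
arrivals_ms = [0, 18, 45, 61, 125, 130, 150]   # assumed uneven arrival times
PLAYOUT_INTERVAL_MS = 20                        # steady pace the application expects
BUFFER_DELAY_MS = 40                            # minimum hold time that absorbs jitter

for seq, arrived in enumerate(arrivals_ms):
    scheduled = seq * PLAYOUT_INTERVAL_MS + BUFFER_DELAY_MS
    if arrived <= scheduled:
        print(f"packet {seq}: arrived {arrived:3d} ms, played at {scheduled:3d} ms "
              f"(held {scheduled - arrived} ms)")
    else:
        print(f"packet {seq}: arrived {arrived:3d} ms, missed its {scheduled} ms slot")
```

A deeper buffer absorbs more jitter but adds latency, which is exactly the trade-off a real-time application like a Zoom call has to balance.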

Fully understanding the causes of jitter in any specific network is a challenge because the causes can be subtle. It’s often hard to pinpoint a jitter problem because it can be here one millisecond and gone the next. But it’s something we should be discussing more. A lot of the complaints people have about their broadband connection are caused by too-high jitter.

Categories
Uncategorized

Gaming and Broadband Demand

Broadband usage has spiked across the US this year as students and employees suddenly found themselves working from home and needing broadband to connect to school and work servers. But there is another quickly growing demand for broadband coming from gaming.

We’ve had online gaming of some sort over the last decade, but gaming has not been a data-intensive activity for ISPs. Until recently, the brains for gaming have been provided by special gaming computers or game boxes run locally by each gamer. These devices and the game software supplied the intensive video and sound experience, and the Internet was only used to exchange game commands between gamers. Command files are not large and contain the same information that is exchanged between a game controller and a gaming computer. In the past, gamers would exchange the command files across the Internet, and local software would interpret and activate the commands being exchanged.

But the nature of online gaming is changing rapidly. Already, before the pandemic, game platforms had been migrating online. Game companies are now running the core software for games in a data center and not on local PCs or game consoles. The bandwidth path required between the data center core and a gamer is much larger than the command files that used to be exchanged since the data path now carries the full video and music signals as well as 2-way communications between gamers.

There is a big benefit of online gaming for gamers, assuming they have enough bandwidth to participate. Putting the gaming brains in a data center reduces the latency, meaning that game commands can be activated more quickly. Latency is signal delay, and the majority of the delay in any Internet transmission happens inside the wires and electronics of the local ISP network. With online gaming, a signal only has to cross the gamer’s own local ISP network. Before online gaming, that signal had to pass through the local ISP networks of both gamers.

There are advantages for gaming companies to move online. They can release a new title instantly to the whole country. Game companies don’t have to manufacture and distribute copies of games. Games can now be sold to gamers who can’t afford the expensive game boxes or computers. Gamers benefit because gaming can now be played on any device and a gamer isn’t forced into buying an expensive gaming computer and then only playing in that one location. Game companies can now sell a gaming experience that can be played from anywhere, not just sitting at a gamer’s computer.

A gaming stream is far more demanding on the network than a video stream from Netflix. Netflix feeds out the video signal in advance of what a viewer is watching, and the local TV or PC stores video content for the next few minutes of viewing. This was a brilliant move by video streamers because streaming ahead of what viewers are watching largely eliminated the delays and pixelation of video streams that were common when Netflix was new. By streaming in advance of what a viewer is watching, Netflix has time to resend any missed packets so that the video viewing experience has ideal quality by the time a viewer catches up to the stream.

Gaming doesn’t have this same luxury because gaming is played in real time. The gamers at both ends of a game need to experience the game at the same time. This greatly changes the demand on the broadband network. Online gaming means a simultaneous stream being sent from a data center to both gamers, and it’s vital that both gamers receive the signal at the same time. Gaming requires a higher quality of download path than Netflix because there isn’t time to resend missed data packets. A gamer needs a quality downstream path to receive a quality video transmission in real-time.
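
A tiny sketch of the difference: a buffered video stream can absorb a late packet because playback runs well behind the download, while a real-time game stream has only a few milliseconds of slack before the moment has passed. The headroom numbers below are assumptions chosen to illustrate the point.

```python
# How much packet lateness each kind of stream can tolerate. Illustrative only.
BUFFERED_VIDEO_HEADROOM_MS = 30_000   # a Netflix-style player buffered ~30 seconds ahead
REAL_TIME_GAME_HEADROOM_MS = 20       # a game frame needed within ~20 ms

def survives(delay_ms: float, headroom_ms: float) -> bool:
    """A late packet is recoverable only if it (or a resend) fits inside the headroom."""
    return delay_ms <= headroom_ms

for delay in (15, 80, 400):
    print(f"packet delayed {delay:>3} ms -> "
          f"video: {'fine' if survives(delay, BUFFERED_VIDEO_HEADROOM_MS) else 'stall'}, "
          f"game: {'fine' if survives(delay, REAL_TIME_GAME_HEADROOM_MS) else 'glitch'}")
```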

Gaming adds a second big demand in that latency becomes critical. A player that receives the signal just a little faster than an opponent has an advantage. A friend of mine has symmetrical gigabit Verizon FiOS fiber broadband at his home, which is capable of delivering the best possible gaming data stream. Yet his son is driving his mother crazy by running category 6 cables between the gaming display and the FiOS modem. He swears that bypassing the home WiFi lowers the latency and gives him an edge over other gamers. From a gamer perspective, network latency is becoming possibly more important than download speed. A gamer on fiber has an automatic advantage over a gamer on a cable company network.

At the same time as the gaming experience has gotten more demanding for network operators, the volume of gaming has exploded during the pandemic as people stuck at home have turned to gaming. All of the major game companies are reporting record earnings. The NPD Group, which tracks the gaming industry, reports that spending on gaming was up 30% in the second quarter of this year compared to 2019.

ISPs are already well aware of gamers, who are the harshest critics of broadband network performance. Gamers understand that little network glitches, hiccups, and burps that other users may not even notice can cost them a game, so gamers closely monitor network performance. Most ISPs know their gamers, who are the first to complain loudly about network problems.

Categories
Regulation - What is it Good For?

FCC Further Defines Speed Tests

The FCC recently voted to tweak the rules for speed testing for ISPs who accept federal funding from the Universal Service Fund or from other federal funding sources. This would include all rate-of-return carriers including those taking ACAM funding, carriers that won the CAF II reverse auctions, recipients of the Rural Broadband Experiment (RBE) grants, Alaska Plan carriers, and likely carriers that took funding in the New York version of the CAF II award process. These new testing rules will also apply to carriers accepting the upcoming RDOF grants.

The FCC had originally released testing rules in July 2018 in Docket DA 18-710. Those rules applied to the carriers listed above as well as to all price cap carriers and recipients of the CAF II program. The big telcos will start testing in January of 2020 and the FCC should soon release a testing schedule for everybody else – the dates for testing were delayed until this revised order was issued.

The FCC made the following changes to the testing program:

  • Modifies the schedule for commencing testing by basing it on the deployment obligations specific to each Connect America Fund support mechanism;
  • Implements a new pre-testing period that will allow carriers to become familiar with testing procedures without facing a loss of support for failure to meet the requirements;
  • Allows greater flexibility to carriers for identifying which customer locations should be tested and selecting the endpoints for testing broadband connections. This last requirement sounds to me like the FCC is letting the CAF II recipients off the hook by allowing them to only test customers they know meet the 10/1 Mbps speeds.

The final order should be released soon and will hopefully answer carrier questions. One of the areas of concern is that the FCC seems to want to test the maximum speeds that a carrier is obligated to deliver. That might mean having to give customers the fastest connection during the time of the tests even if they have subscribed to slower speeds.

Here are some of the key provisions of the testing program that were not changed by the recent order:

  • ISPs can choose between three methods for testing. First, they may elect what the FCC calls the MBA program, which uses an external vendor, approved by the FCC, to perform the testing. This firm has been testing speeds for the network built by large telcos for many years. ISPs can also use existing network tools if they are built into the customer CPE that allows test pinging and other testing methodologies. Finally, an ISP can install ‘white boxes’ that provide the ability to perform the tests.
  • Testing, at least for now, is perpetual, and carriers need to recognize that this is a new cost they have to bear due to taking federal funding.
  • The number of tests to be conducted will vary by the number of customers for which a recipient is getting support: with 50 or fewer households, the test is for 5 customers; for 51-500 households, the test is 10% of households; for 500 or more households, the test is 50 households. ISPs declaring a high latency must test more locations, with the maximum being 370.
  • Tests for a given customer are for one solid week, including weekends, in each quarter. Tests must be conducted in the evenings between 6:00 PM and 12:00 AM. Latency tests must be done every minute during the six-hour testing window. Speed tests – run separately for upload speeds and download speeds – must be done once per hour during the six-hour testing window.
  • ISPs are expected to meet latency standards 95% of the time. Speed tests must achieve 80% of the expected upload and download speed 80% of the time (see the sketch after this list). An example of this requirement is that a carrier guaranteeing a gigabit of speed must achieve 800 Mbps 80% of the time. ISPs that meet the speeds and latencies for 100% of customers are excused from quarterly testing and only have to test once per year.
  • There are financial penalties for ISPs that don’t meet these tests.
  • ISPs that have between 85% and 100% of households that meet the test standards lose 5% of their FCC support.
  • ISPs that have between 70% and 85% of households that meet the test standards lose 10% of their FCC support.
  • ISPs that have between 55% and 70% of households that meet the test standards lose 15% of their FCC support.
  • ISPs with less than 55% of compliant households lose 25% of their support.
  • The penalties only apply to funds that haven’t yet been collected by an ISP.
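
As a rough illustration of how the 80/80 speed rule and the penalty tiers interact, here is a sketch that takes hypothetical per-household test results and works out the support reduction. The thresholds mirror the ones listed above; the sample data and the handling of exact boundary percentages are assumptions.

```python
# Hypothetical quarterly download speed tests (Mbps) for three supported households.
SUBSCRIBED_MBPS = 100.0   # speed the carrier committed to deliver

households = {
    "hh1": [95, 88, 92, 97, 90, 85],
    "hh2": [60, 72, 81, 79, 70, 66],   # mostly below 80% of the committed speed
    "hh3": [99, 98, 97, 96, 99, 98],
}

def passes_80_80(tests, subscribed):
    """True if at least 80% of tests reach at least 80% of the subscribed speed."""
    threshold = 0.8 * subscribed
    hits = sum(1 for t in tests if t >= threshold)
    return hits / len(tests) >= 0.8

compliant = sum(passes_80_80(t, SUBSCRIBED_MBPS) for t in households.values())
pct = 100 * compliant / len(households)

# Penalty tiers summarized above; 100% compliance assumed to carry no penalty.
if pct >= 100:
    penalty = 0
elif pct >= 85:
    penalty = 5
elif pct >= 70:
    penalty = 10
elif pct >= 55:
    penalty = 15
else:
    penalty = 25

print(f"{compliant}/{len(households)} households compliant ({pct:.0f}%) -> lose {penalty}% of support")
```
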
Categories
Regulation - What is it Good For?

Should Satellite Broadband be Subsidized?

I don’t get surprised very often in this industry, but I must admit that I was surprised by the amount of money awarded for satellite broadband in the reverse auction for CAF II earlier this year. Viasat, Inc., which markets as Exede, was the fourth largest winner, collecting $122.5 million in the auction.

I understand how Viasat won – it’s largely a function of the way that reverse auctions work. In a reverse auction, each bidder lowers the amount of their bid in successive rounds until only one bidder is left in any competitive situation. The whole pool of bids is then adjusted to meet the available funds, which could mean an additional reduction of what winning bidders finally receive.

Satellite providers, by definition, have a huge unfair advantage over every other broadband technology. Viasat was already in the process of launching new satellites – and they would have launched them with or without the FCC grant money. Because of that, there is no grant level too low for them to accept out of the grant process – they would gladly accept getting only 1% of what they initially requested. A satellite company can simply outlast any other bidder in the auction.

This is particularly galling since Viasat delivers what the market has already deemed to be inferior broadband. The download speeds are fast enough to satisfy the reverse auction at speeds of at least 12 Mbps. The other current satellite provider, HughesNet, offers speeds of at least 25 Mbps. The two issues that customers have with satellite broadband are the latency and the data caps.

By definition, the latency for a satellite at a 22,000-mile orbit is at least 476 ms (milliseconds) just to account for the distance traveled to and from the earth. Actual latency is often above 600 ms. The rule of thumb is that real-time applications like VoIP, gaming, or holding a connection to a corporate LAN start having problems when latency is greater than 100-150 ms.
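
The arithmetic behind that figure is simple: a request has to travel up to the satellite and back down to reach the Internet, and the response makes the same trip in reverse – four legs in all. A quick back-of-the-envelope calculation is sketched below, using a rounded 22,000-mile altitude.

```python
# Back-of-the-envelope geostationary latency from distance alone.
SPEED_OF_LIGHT_MILES_PER_MS = 186.282   # ~186,282 miles per second
ALTITUDE_MILES = 22_000                 # rounded; the exact figure is ~22,236 miles

round_trip_miles = 4 * ALTITUDE_MILES   # up and down for the request, up and down for the reply
latency_ms = round_trip_miles / SPEED_OF_LIGHT_MILES_PER_MS

print(f"minimum round-trip latency: {latency_ms:.0f} ms")   # ~472 ms; ~477 ms at the exact altitude
```

Every router, ground station, and congested link adds to that floor, which is how real-world satellite latency ends up above 600 ms.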

Exede no longer cuts customers dead for the month once they reach the data cap, but instead reduces speeds when the network is busy for any customer over the cap. Customer reviews say this can be extremely slow during prime times. The monthly data caps are small, with plans ranging from $49.99 per month for a 10 GB data cap to $99.95 per month for a 150 GB data cap. To put those caps into perspective, OpenVault recently reported that the average landline broadband household used 273.5 GB of data per month in the first quarter of 2019.

Viasat has to be thrilled with the result of the reverse auction. They got $122.5 million for something they were already doing. The grant money isn’t bringing any new option to customers who were already free to buy these products before the auction. There is no better way to say it other than Viasat got free money due to a loophole in the grant process. I don’t think they should have been allowed into the auction since they aren’t bringing any broadband that is not already available.

The bigger future issue is if the new low-earth orbit satellite companies will qualify for the future FCC grants, such as the $20.4 billion grant program starting in 2021. The new grant programs are also likely to be reverse auctions. There is no doubt that Jeff Bezos or Elon Musk will gladly take government grant money, and there is no doubt that they can underbid any landline ISP in a reverse auction.

For now, we don’t know anything about the speeds that will be offered by the new satellites. We know that they claim that latency will be about the same as cable TV networks at about 25 ms. We don’t know about data plans and data caps, although Elon Musk has hinted at having unlimited data plans – we’ll have to wait to see what is actually offered.

It would be a tragedy for rural broadband if the new (and old) satellite companies were to win any substantial amount of the new grant money. To be fair, the new low-orbit satellite networks are expensive to launch, with price tags for each of the three providers estimated to be in the range of $10 billion. But these companies are using these satellites worldwide and will be launching them with or without help from an FCC subsidy. Rural customers are going to best be served in the long run by having somebody build a network in their neighborhood. It’s the icing on the cake if they are also able to buy satellite broadband.

Categories
What Customers Want

Why Offer Fast Data Speeds?

A commenter on an earlier blog asked a great question. They observed that most ISPs say that customer usage doesn’t climb when customers are upgraded to speeds faster than 50 Mbps – so why does the industry push for faster speeds? The question was prompted by the observation that the big cable companies have unilaterally increased speeds in most markets to between 100 Mbps and 200 Mbps. There are a lot of different answers to that question.

First, I agree with that observation and I’ve heard the same thing. The majority of households today are happy with a speed of 50 Mbps, and when a customer that already has enough bandwidth is upgraded they don’t immediately increase their downloading habits.

I’ve lately been thinking that 50 Mbps ought to become the new FCC definition of broadband, for exactly the reasons included in the question. This seems to be the speed today where most households can use the Internet in the way they want. I would bet that many households that are happy at 50 Mbps would no longer be happy with 25 Mbps broadband. It’s important to remember that just three or four years ago the same thing could have been said about 25 Mbps, and three or four years before that the same was true of 10 Mbps. One reason to offer faster speeds is to stay ahead of that growth curve. Household bandwidth and speed demand has been doubling every three years or so since 1980. While 50 Mbps is a comfortable level of home bandwidth for many today, in just a few years it won’t be.
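
To see why a comfortable speed today won’t stay comfortable, here is a quick projection of that doubling-every-three-years rule of thumb, starting from a household that is happy at 50 Mbps today. The starting point and the exact doubling period are assumptions.

```python
# Rough projection of household bandwidth demand if it keeps doubling every three years.
demand_mbps = 50.0        # assumed comfortable household speed today
DOUBLING_YEARS = 3

for year in range(0, 13, DOUBLING_YEARS):
    print(f"year {year:2d}: a 'comfortable' household needs ~{demand_mbps:.0f} Mbps")
    demand_mbps *= 2
```

By this arithmetic, a 50 Mbps household is bumping against 200 Mbps within six years – which is roughly where the cable companies have already repositioned their base products.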

It’s also worth noting that there are some households who need more than the 50 Mbps speeds because of the way they use the Internet. Households with multiple family members that all want to stream at the same time are the first to bump against the limitations of a data product. If ISPs never increase speeds above 50 Mbps, then every year more customers will bump against that ceiling and begin feeling frustrated with that speed. We have good evidence this is true by seeing customers leave AT&T U-verse, at 50 Mbps, for faster cable modem broadband.

Another reason that cable companies have unilaterally increased speeds is to help overcome customer WiFi issues. Customers often don’t care about the speed in the room with the WiFi modem, but care about what they can receive in the living room or a bedroom that is several rooms away from the modem. Faster download speeds can provide the boost needed to get a stronger WiFi signal through internal walls. The big cable companies know that increasing speeds cuts down on customer calls complaining about speed issues. I’m pretty sure that the cable companies will say that increasing speeds saves them money due to fewer customer complaints.

Another important factor is customer perception. I always tell people that if they have the opportunity, they should try a computer connected to gigabit speeds. A gigabit product ‘feels’ faster, particularly if the gigabit connection is on fiber with low latency. Many of us are old enough to remember that day when we got our first 1 Mbps DSL or cable modem and got off dial-up. The increase in speed felt liberating, which makes sense because a 1 Mbps DSL line is twenty times faster than dial-up, and also has a lower latency. A gigabit connection is twenty times faster than a 50 Mbps connection and seeing it for the first time has that same wow factor – things appear on the screen almost instantaneously as you hit enter. The human eye is really discerning, and it can see a big difference between loading the same web site at 25 Mbps and at 1 Gbps. The actual time difference isn’t very much, but the eye tells the brain that it is.  I think the cable companies have figured this out – why not give faster speeds if it doesn’t cost anything and makes customers happy?
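
For a sense of the actual time difference the eye is reacting to, here is a quick calculation of the raw transfer time for a typical web page at various speeds. The page size is an assumption, and real page loads also depend heavily on latency and server response times.

```python
# Raw transfer time for a ~3 MB web page at different connection speeds.
# Ignores latency, DNS, and server time, which often dominate in practice.
PAGE_MEGABITS = 3 * 8   # ~3 megabytes = 24 megabits

for speed_mbps in (25, 50, 100, 1000):
    ms = PAGE_MEGABITS / speed_mbps * 1000
    print(f"{speed_mbps:5d} Mbps: {ms:6.0f} ms of transfer time")
```

The gap between roughly a second and a couple dozen milliseconds is small in absolute terms, but it is exactly the difference the eye registers as instantaneous.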

While customers might not immediately use more broadband, I think increasing the speed invites them to do so over time. I’ve talked to a lot of people who have lived with inadequate broadband connections and they become adept at limiting their usage, just like we’ve all done for many years with cellular data usage. Rural families all know exactly what they can and can’t do on their broadband connection. For example, if they can’t stream video and do schoolwork at the same time, they change their behavior to fit what’s available to them. Even non-rural homes learn to do this to a degree. If trying to stream multiple video streams causes problems, customers quickly learn not to do it.

Households with fast and reliable broadband don’t give a second thought to adding an additional broadband application. It’s not a problem to add a new broadband device or to install a video camera at the front door. It’s a bit of a chicken-and-egg question – do fast broadband speeds promote greater broadband usage, or does the desire to use more applications drive the desire to get faster speeds? It’s hard to know anymore, since so many homes have broadband speeds from cable companies or fiber providers that are set faster than what they need today.

Categories
The Industry

OneWeb Launches Broadband Satellites

Earlier this month OneWeb launched six test satellites intended for an eventual satellite fleet intended to provide broadband. The six satellites were launched from a Soyuz launch vehicle from the Guiana Space Center in Kourou, French Guiana.

OneWeb was started by Greg Wyler of Virginia in 2012, originally under the name of WorldVu. Since then the company has picked up heavy-hitter investors like Virgin, Airbus, SoftBank and Qualcomm. The company’s plan is to launch an initial constellation of 650 satellites that will blanket the earth, with ultimate deployment of 1,980 satellites. The plans are to deploy thirty of the sixty-five pound satellites with each launch. That means twenty-two successful launches are needed to deploy the first round.

Due to the low-earth orbits of the satellites, at about 745 miles above earth, the OneWeb satellites will avoid the huge latency that is inherent to current satellite broadband providers like HughesNet, which uses satellites orbiting at 22,000 miles above the earth. The OneWeb specifications filed with the FCC talk about having latency in the same range as cable TV networks, in the 25-30 millisecond range. But where a few high-orbit satellites can see the whole earth, the big fleet of low-orbit satellites is needed just to be able to see everywhere.

The company is already behind schedule. It had originally promised coverage across Alaska by the end of 2019. They are now talking about having customer demos sometime in 2020 with live broadband service in 2021. The timeline matters for a satellite company because the spectrum license from the FCC requires that they launch 50% of their satellites within six years and all of them within nine years. Right now, OneWeb and also Elon Musk’s SpaceX have both fallen seriously behind the needed deployment timeline.

The company’s original goal was to bring low-latency satellite broadband to everybody in Alaska. While they are still talking about bringing broadband to those who don’t have it today, their new business plan is to sell directly to airlines and cruise ship lines and to sell wholesale to ISPs who will then market to the end user.

It will be interesting to see what kinds of speeds will really be delivered. The company talks today about a maximum speed of 500 Mbps. But I compare that number to the claim that 5G cellphones can work at 600 Mbps, as demonstrated last year by Sprint – it’s possible only in a perfect lab setting. The best analog to a satellite network is a wireless transmitter on a tower in a point-to-multipoint network. That transmitter is capable of making a relatively small number of big-bandwidth connections or many more low-bandwidth connections. The economic sweet spot will likely be to offer many connections at 50 – 100 Mbps rather than fewer connections at a higher speed.
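
A crude way to see that tradeoff is to divide an assumed amount of capacity for a single satellite beam (or wireless sector) among subscribers at different speed tiers. All of the numbers below are hypothetical.

```python
# Hypothetical capacity split for one satellite beam or wireless sector.
BEAM_CAPACITY_MBPS = 5_000     # assumed total usable capacity
OVERSUBSCRIPTION = 4           # assumed ratio, since subscribers don't all peak at once

for per_user_mbps in (500, 100, 50):
    subscribers = BEAM_CAPACITY_MBPS * OVERSUBSCRIPTION // per_user_mbps
    print(f"at {per_user_mbps:3d} Mbps per subscriber: ~{subscribers} subscribers per beam")
```

The revenue math naturally pushes toward many 50 – 100 Mbps connections rather than a handful of 500 Mbps ones, which is the sweet spot described above.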

It’s an interesting business model. The upfront cost of manufacturing and launching the satellites is high. It’s likely that a few launches will go awry and destroy satellites. But other than replacing satellites that go bad over time, the maintenance costs are low. The real issue will be the bandwidth that can be delivered. Speeds of 50 – 100 Mbps will be welcomed in the rural US for those with no better option. But like with all low-bandwidth technologies – adequate broadband that feels okay today will feel a lot slower in a decade as household bandwidth demand continues to grow. The best long-term market for the satellite providers will be those places on the planet that are not likely to have a landline alternative – which is why they first targeted rural Alaska.

Assuming that the low-earth satellites deliver as promised, they will become part of the broadband landscape in a few years. It’s going to be interesting to see how they play in the rural US and around the world.

Categories
The Industry

Verizon’s Case for 5G, Part 3

Ronan Dunne, an EVP and President of Verizon Wireless, recently made Verizon’s case for aggressively pursuing 5G. In this blog I want to examine the two claims based upon improved latency – gaming and stock trading.

The 5G specification sets a goal of zero latency for the connection from the wireless device to the cellular tower. We’ll have to wait to see if that can be achieved, but obviously the many engineers that worked on the 5G specification think it’s possible. It makes sense from a physics perspective – a radio signal through the air travels for all practical purposes at the speed of light (there is a minuscule amount of slowing from interaction with air molecules). This makes a signal through the air slightly faster than one through fiber, since light slows down when passing through glass fiber – a signal needs roughly 0.83 milliseconds to traverse every hundred miles of fiber optic cable, versus about 0.54 milliseconds for the same distance through the air.
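
A minimal sketch of that comparison, using the speed of light in a vacuum and an assumed refractive index of about 1.5 for glass fiber:

```python
# Propagation time over 100 miles: radio through air vs. light through fiber.
SPEED_OF_LIGHT_MILES_PER_S = 186_282
FIBER_REFRACTIVE_INDEX = 1.5          # assumed; typical single-mode fiber is ~1.47

distance_miles = 100
air_ms = distance_miles / SPEED_OF_LIGHT_MILES_PER_S * 1000
fiber_ms = air_ms * FIBER_REFRACTIVE_INDEX

print(f"100 miles through air:   {air_ms:.2f} ms")    # ~0.54 ms
print(f"100 miles through fiber: {fiber_ms:.2f} ms")  # ~0.81 ms
```

As the next paragraphs note, that fraction of a millisecond is quickly swamped by the routers and switches along the path.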

This means that a 5G signal will have a slight latency advantage over FTTP – but only for the first leg of the connection from a customer. A 5G wireless signal almost immediately hits a fiber network at a tower or small cell site in a neighborhood, and from that point forward the 5G signal experiences the same latency as an all-fiber connection.

Most of the latency in a fiber network comes from devices that process the data – routers, switches and repeaters. Each such device in a network adds some delay to the signal – and that starts with the first device, be it a cellphone or a computer. In practical terms, when comparing 5G and FTTP the network with the fewest hops and fewest devices between a customer and the internet will have the lowest latency – a 5G network might or might not be faster than an FTTP network in the same neighborhood.

5G does have a latency advantage over non-fiber technologies, but it ought to be about the same advantage enjoyed by an FTTP network. Most FTTP networks have latency in the 10-millisecond range (one hundredth of a second). Cable HFC networks have latency in the range of 25-30 ms; DSL latency ranges from 40-70 ms; satellite broadband connections from 100-500 ms.

Verizon’s claim for improving the gaming or stock trading connection also implies that the 5G network will have superior overall performance. That brings in another factor which we generally call jitter. Jitter is the overall interference in a network that is caused by congestion. Any network can have high or low jitter depending upon the amount of traffic the operator is trying to shove through it. A network that is oversubscribed with too many end users will have higher jitter and will slow down – this is true for all technologies. I’ve had clients with first generation BPON fiber networks that had huge amounts of jitter before they upgraded to new FTTP technology, so fiber (or 5G) alone doesn’t mean superior performance.

The bottom line is that a 5G network might or might not have an overall advantage compared to a fiber network in the same neighborhood. The 5G network might have a slight advantage on the first connection from the end user, but that also assumes that cellphones are more efficient than PCs. From that point forward, the network with the fewest hops to the Internet as well the network with the least amount of congestion will be faster – and that will be case by case, neighborhood by neighborhood when comparing 5G and FTTP.

Verizon is claiming that the improved latency will improve gaming and stock trading. That’s certainly true where 5G competes against a cable company network. But any trader who really cares about making a trade a millisecond faster is already going to be on a fiber connection, and probably one that sits close to a major Internet POP. Such traders are engaging in computerized trading where a person is not intervening in the trade decision. For any stock trades that involve humans, an extra few thousandths of a second in executing a trade is irrelevant since the human decision process is far slower than that (for someone like me these decisions can be measured in weeks!).

Gaming is more interesting. I see Verizon’s advantage for gaming in making game devices mobile. If 5G broadband is affordable (not a given), then a 5G connection allows a game box to be used anywhere there is power. I think that will be a huge hit with the mostly-younger gaming community. And, since most homes buy broadband from the cable company, the lower latency of 5G ought to appeal to a gamer using a cable network, assuming the 5G network has adequate upload speeds and low jitter. Gamers who want a fiber-like experience will likely pony up for a 5G gaming connection if it’s priced right.

Categories
Technology

Standards for 5G

Despite all of the hype that 5G is right around the corner, it’s important to remember that there is not yet a complete standard for the new technology.

The industry just took a big step on February 22 when the ITU released a draft of what it hopes is the final specification for 5G. The document is heavy in engineering detail and is not written for the layman. You will see that the draft talks about a specification for ‘IMT-2020’ which is the official name of 5G. The goal is for this draft to be accepted at a meeting of the ITU-R Study Group in November.

This latest version of the standard defines 13 metrics that are the ultimate goals for 5G. A full 5G deployment would include all of these metrics. What we know that we will see is commercial deployments from vendors claiming to have 5G, but which will actually meet only some parts of a few of these metrics. We saw this before with 4G, and the recent deployment of LTE-U is the first 4G product that actually meets most of the original 4G standard. We probably won’t see a cellular deployment that meets any of the 13 5G metrics until at least 2020, and it might be five to seven more years after that until fully compliant 5G cellular is deployed.

The metric that is probably the most interesting is the one that establishes the goal for cellular speeds. The goals of the standard are 100 Mbps download and 50 Mbps upload. Hopefully this puts to bed the exaggerated press articles that keep talking about gigabit cellphones. And even should the technology meet these target speeds, in real life deployment the average user is probably only going to receive half those speeds due to the fact that cellular speeds decrease rapidly with distance from a cell tower. Somebody standing right next to a cell tower might get 100 Mbps, but even as close as a mile away the speeds will be considerably less.

Interestingly, these speed goals are not much faster than is being realized by LTE-U today. But the new 5G standard should provide for more stable and guaranteed data connections. The standard is for a 5G cell site to be able to connect to up to 1 million devices per square kilometer (a little more than a third of a square mile). This, plus several other metrics, ought to result in stable 5G cellular connections – which is quite different than what we are used to with 4G connections. The real goal of the 5G standard is to provide connections to piles of IoT devices.

The other big improvement over 4G is in the expectations for latency. Today’s 4G connections have data latencies as high as 20 ms, which accounts for most problems in loading web pages or watching video on cellphones. The new standard is 4 ms latency, which would improve cellular latency to around the same level that we see today on fiber connections. The new 5G standard for handing off calls between adjoining cell sites is 0 ms, or zero delay.

The standard increases the potential capacity of cell sites and sets a goal for a cell site to be able to process peak data rates of 20 Gbps down and 10 Gbps up. Of course, that means bringing a lot more bandwidth to cell towers, and only extremely busy urban towers will ever need that much capacity. Today the majority of fiber-fed cell towers are fed with 1 Gbps backbones that are used to satisfy upload and download combined. We are seeing cellular carriers inquiring about 10 Gbps backbones, and we need a lot more growth to meet the capacity built into the standard.

There are a number of other standards. Included is a standard requiring greater energy efficiency, which ought to help save on handset batteries – the new standard allows for handsets to go to ‘sleep’ when not in use. There is a standard for peak spectral efficiency which would enable 5G to much better utilize existing spectrum. There are also specifications for mobility that extend the goal to be able to work with vehicles going as fast as 500 kilometers per hour – meaning high speed trains.

Altogether, the 5G standard improves almost every aspect of cellular technology. It calls for more robust cell sites, improved quality of the data connections to devices, lower energy requirements, and more efficient hand-offs. But interestingly, contrary to the industry hype, it does not call for gigantic increases in cellular handset data speeds compared to a fully-compliant 4G network. The real improvements from 5G are to make sure that people can get connections at busy cell sites while also providing for huge numbers of connections to smart cars and IoT devices. A 5G connection is going to feel faster because you ought to almost always be able to make a 5G connection, even in busy locations, and because the connection will have low latency and be stable, even in moving vehicles. It will be a noticeable improvement.
