Our Degrading Networks

Lately I’ve been hearing a lot of stories about rural broadband with a common theme: people say their broadband was fine for years and is now suddenly terrible. This seems to happen more on DSL networks than with other technologies, but you hear it about rural cable networks as well.

There are several issues that contribute to the problem – more customers sharing a local network, increasing data usage by the average customer, and a data backbone feeding the neighborhood that has grown too small for current usage.

Broadband adoption rates have continued to grow as more and more households find broadband mandatory. So a neighborhood that once had 50% of homes using the local network may now have more than 70%. That alone can stress a local network.

Household broadband usage has also been increasing, and a lot of the new usage is streaming video. That video doesn’t just come from Netflix; there is now video all over the web and social media, and it’s hard to browse the web today without encountering it. When many customers stream video at the same time, they can quickly demand more aggregate data than the network can supply. Where demand has outstripped network capability there is a remedy for most situations: increasing the size of the bandwidth pipe feeding the neighborhood will typically fix the problem.

Let’s look at an example. Consider a neighborhood that has 100 DSL customers and that is fed by a DS3 (45 Mbps). In the days before a lot of streaming video such a neighborhood probably felt like it had good broadband. The odds against more than a few customers trying to download something really large at exactly the same time meant that there was almost always enough bandwidth for everybody.

But today people want to watch streaming video. Netflix recommends at least a 1.5 Mbps continuous stream to watch a video, so up to about 30 households in this theoretical neighborhood could watch Netflix at the same time. The math is not quite that linear, as I explain below, but it shows the scale of the problem. And it’s not hard to imagine 100 homes demanding more than 30 video streams at once, particularly since some households want to watch more than one Netflix stream at the same time.
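The arithmetic above can be sketched in a few lines. This is just the back-of-the-envelope math from the example (100 homes, a 45 Mbps DS3, 1.5 Mbps per stream); the function name is illustrative:

```python
# Back-of-the-envelope oversubscription math for the example neighborhood.
# Assumes the entire DS3 is available for video, which is optimistic.

def max_concurrent_streams(backbone_mbps: float, stream_mbps: float) -> int:
    """Naive ceiling on how many video streams a shared pipe can carry."""
    return int(backbone_mbps // stream_mbps)

homes = 100
backbone = 45.0   # DS3 capacity in Mbps
netflix = 1.5     # Netflix's recommended minimum continuous stream, Mbps

streams = max_concurrent_streams(backbone, netflix)
print(streams)            # 30 streams at best
print(streams / homes)    # 0.3 -- under a third of homes served at once
```

With only 30 possible streams for 100 homes, it only takes a busy evening to exceed the ceiling.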

The problems in this theoretical neighborhood are made worse by what is called packet loss. When a link gets congested, router buffers fill up; some packets get through, but some are simply dropped. Our current web protocols recover from this: the missing packets are detected through the receiver’s acknowledgments, and the sender retransmits them. As networks get busy, contention and packet loss increase, and a growing percentage of packets must be sent multiple times – so busy networks grow increasingly less efficient. Where this theoretical neighborhood network could in theory accommodate 30 Netflix streams, in real life it might handle only 20 due to the extra traffic caused by resending lost packets.

This theoretical network has gone over time from efficient to totally inadequate. Customers who were once happy with their speeds now can’t watch Netflix on an average evening. The network will still function great at 4:00 AM when nobody is trying to use it, but during the hours when people want to use it, it will fail more often than not. The only way to fix this theoretical neighborhood is to increase the backbone from 45 Mbps to something much larger. And that requires capital – and we all know that the large telcos are not putting capital into copper neighborhoods.

Cellular companies have been dealing with these growth issues for a number of years now. Cellular networks are seeing traffic growth between 60% and 120% per year, meaning that any improvement in the network is quickly eaten up by increased demand. But it’s a much bigger issue to keep upgrading all of the landline networks. While there are just over 200,000 cell towers in the US, there must be several million local broadband backbone connections into neighborhoods. These range from tiny backbones of a few T1s feeding a few homes up to networks where a few hundred people share a larger backbone. Upgrading that many backbone connections means a huge capital outlay just to maintain acceptable levels of service.

Unfortunately my theoretical neighborhood is not really all that theoretical. The big increase in landline broadband demand is now starting to max out the bandwidth utilization in many neighborhoods. The FCC says there are 34 million people in the country without adequate broadband today. But at the rate that neighborhood networks are degrading, the number of households with inadequate broadband is growing rapidly – not getting smaller as the FCC is hoping.

5 thoughts on “Our Degrading Networks”

      • This is why I described the situation as a crisis in my eBook “Service Unavailable: America’s Telecommunications Infrastructure Crisis.” We have very fast growing bandwidth demand for premise (and mobile) IP-based services that is tracking Moore’s Law, but no real plan to meet that demand.

        Many may disagree with my proposed solution of a crash federal public works program to build universal FTTP (which will also benefit mobile). But I don’t see the private sector addressing the crisis on its own. It also naturally focuses on discrete metro markets and not on providing a national solution.

  1. Thanks for a post on the impacts of oversubscription. How much does speed throttling affect packet loss? I’ve heard arguments that tiered bandwidth limiting, say to 10 Mbps, from a 1 Gbps backhaul significantly contributes to packet loss and related network inefficiencies. A burst of packets is initially sent, but most are dropped due to the throttling at the CPE.

    Shouldn’t we really have a different pricing model that is not speed-based, but based on usage? Then the economics of customer, ISP and backhaul providers are aligned.
