Our Internet Infrastructure

Paul Barford, a professor at the University of Wisconsin, led an effort to map the major routes used by the Internet in the U.S. He believes that knowledge of the map can help us plan better to make the Internet less susceptible to natural disasters, accidents, or intentional sabotage.

I can remember two times when the Internet backbone took a serious hit in this country, and they were both in 2001. First, a 60-car CSX train derailed in the Howard Street tunnel in Baltimore; the resulting fire melted many of the fiber cables on the east coast north-south route. Then later that year, on 9/11, the World Trade Center towers collapsed, taking out the main carrier hotel and data center in Manhattan.

And there is no reason to think that we won’t have more disasters. When I look at the map, my first reaction is how few routes there are in the main backbone.

[Map of the Internet backbone in the U.S.]

Professor Barford hopes the map will spur conversation about the need for more route diversity. The Department of Homeland Security agrees and is publishing the map and making the details of the routes available to government, public, and private researchers.

Some might say that publishing such a map makes us more vulnerable. I don’t think it does. Everybody in the industry knows the addresses of the main Internet POPs since those are the end points of the data connections that ISPs buy to connect to the Internet. And I didn’t really need this map to know that the major routes of fiber mostly follow the Interstate highways. In Florida, where I live, there is a route on I-95 on one side of the state and I-75 on the other with a spur to Orlando. I doubt that anybody here in the industry didn’t already know that.

The one thing that strikes me about the map is that once you get off the major big-city routes, many of the smaller US markets have only one route into and out of their hub; it doesn’t look that hard to isolate some markets with a couple of fiber cuts. I know that some of the carriers involved in the backbone have contingency plans that don’t show up on this map, and in most places there are other fiber routes that can pick up the slack fairly soon after a major Internet outage.

The other thing you realize about this network is that it wasn’t really designed—it grew organically. The network takes the shortest path between major markets using major roads and thus follows the routes built by the first fiber pioneers in the 80s and early 90s.

Hopefully this map spurs the carriers to get together and plan a more robust backbone going into the future. It’s very easy to get complacent about a network that is functioning, but this map highlights a number of vulnerable points in the network that could be improved. This kind of planning was undertaken by the large electric grids after a number of power outages a decade ago. Let’s not wait for major Internet outages to get us to pay attention to making the network safer and more redundant.


Is There a Web Video Crisis? – Part I

The whole net neutrality issue has been driven by the fact that companies like Comcast and Verizon want to charge large content providers like NetFlix to offset some of their network costs of carrying those videos. Comcast implies that without such payments NetFlix content will have trouble making it to customers. By demanding such payments Comcast is saying that their network is having trouble carrying video, meaning that there is a video crisis on the web, or at least on the Comcast network.

But is there? Certainly video is king and constitutes the majority of traffic on the web today. And the amount of video traffic is growing rapidly as more customers watch video on the web. But everybody has known for years that this is coming and Comcast can’t be surprised that it is being asked to deliver video to people.

Let’s look at this issue first from the edge backwards. Let’s say that on average in a metro area Comcast has sold a 20 Mbps download connection to each of its customers. Some buy slower or faster speeds than that, but every one of Comcast’s products is fast enough to carry streaming video. Like all carriers, Comcast does something called oversubscription, meaning that they sell more access to customers than their network can supply at once. But in doing so they are banking on the fact that everybody won’t watch video at the same time. And they are right; it never happens. I have a lot of clients with broadband networks and I can’t think of one of them who has been overwhelmed in recent years by demands from customers on the edge. Those edge networks ought to be robust enough to deliver the speeds that are sold to customers. That is the primary thing customers are paying for.
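
The oversubscription math behind this can be sketched with a few made-up numbers. The subscriber count, concurrency, and utilization figures below are illustrative assumptions on my part, not anything published by Comcast:

```python
# A toy oversubscription model with assumed (not real-world) inputs.

def required_capacity_mbps(subscribers, sold_speed_mbps,
                           peak_concurrency, avg_utilization):
    """Capacity a network segment needs if `peak_concurrency` of
    subscribers are active at once, each using `avg_utilization`
    of the speed they were sold."""
    return subscribers * sold_speed_mbps * peak_concurrency * avg_utilization

nominal = 1000 * 20  # 1,000 subscribers sold 20 Mbps each = 20,000 Mbps nominal
# Assume 30% of subscribers active at the peak, each using half their speed:
needed = required_capacity_mbps(1000, 20, 0.30, 0.50)

print(f"Nominal demand: {nominal} Mbps")
print(f"Engineered capacity: {needed:.0f} Mbps")
print(f"Oversubscription ratio: {nominal / needed:.1f}:1")
```

The point of the sketch is that even a heavily oversubscribed segment works fine as long as the concurrency assumption holds, which is exactly what carriers bank on.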

So Comcast’s issues must be somewhere else in the network, because their connections to customers ought to be robust enough to deliver video to a lot of people at the same time. One place that could be a problem is the Internet backbone, the connection between Comcast and the Internet. I have no idea how Comcast manages this, but I know how hundreds of smaller carriers do it. They generally buy enough capacity so that they rarely use more than some base amount, say 60%, of the backbone. By keeping a comfortable overhead on the Internet pipe they are ready for those rare days when usage bursts much higher. And if they do get too busy they usually have the ability to burst above their prescribed bandwidth limits to satisfy customer demand. This costs them more, but the capacity is available to them without them having to ask for it.
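
That sizing rule, buying enough capacity that the typical peak sits around 60% of the pipe, is simple arithmetic. Here is a small sketch with hypothetical numbers (the 6 Gbps peak is an invented example, not any particular carrier’s figure):

```python
def backbone_purchase_mbps(peak_demand_mbps, target_utilization=0.60):
    """Size a purchased Internet pipe so the typical peak sits at the
    target utilization, leaving headroom for unusual bursts."""
    return peak_demand_mbps / target_utilization

# Hypothetical carrier whose busiest hour averages 6 Gbps of demand:
pipe = backbone_purchase_mbps(6000)
print(f"Buy ~{pipe:.0f} Mbps so a 6,000 Mbps peak uses {6000 / pipe:.0%} of the pipe")
```

A carrier running this kind of calculation re-sizes the pipe as the peak grows, which is why steady video growth shouldn’t catch anyone by surprise.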

So one would not think that the issue for Comcast is their connection to the Internet. They ought to be sizing this according to the capacity that they are selling in aggregate to all of their end users. The price of backbone bandwidth has been dropping steadily for years, and the price that their customers pay them for bandwidth should be sufficient for them to make the backbone robust enough.

That only leaves one other part of the network, which is what we refer to as distribution. These are the fiber connections that run from a headend or a hub out to the neighborhoods. Certainly these connections have gotten larger over time, and I would assume that, like all carriers, Comcast has had to increase capacity in the distribution plant. Where a neighborhood might once have been perfectly fine sharing a gigabit of data, it might now need ten gigabits. That kind of upgrade means putting a larger laser on the fiber connection between the Comcast headend and the neighborhood nodes.
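
To put rough numbers on that gigabit-to-ten-gigabit jump: assuming something like 5 Mbps per HD video stream (my own round-number assumption, not a Comcast spec), the shared node capacity translates into simultaneous streams like this:

```python
HD_STREAM_MBPS = 5  # assumed bandwidth per HD video stream (illustrative)

for node_capacity_gbps in (1, 10):
    # Convert the node's shared capacity to Mbps and divide by per-stream demand.
    streams = node_capacity_gbps * 1000 // HD_STREAM_MBPS
    print(f"A {node_capacity_gbps} Gbps node carries ~{streams} simultaneous HD streams")
```

A ten-fold laser upgrade buys a ten-fold jump in how many households in the node can stream at once, which is the whole reason for the distribution upgrades.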

Again, I would think that the prices customers pay ought to cover the cost of the distribution network, just as they ought to cover the edge network and the backbone network. Comcast has been unilaterally increasing speeds over time, periodically bumping customers from, say, 10 Mbps to 15 Mbps. One would assume that they would only increase speeds if they have the capacity to actually deliver those new higher speeds.

Looking at the base components of the network, the edge, the backbone, and the distribution, I can’t see where Comcast should be having a problem. The prices that customers pay ought to be more than sufficient to make sure that those three components are robust enough to deliver what Comcast is selling to customers. If they are not, then Comcast has oversold their capacity, which sounds like their issue and not NetFlix’s.

In the next article in this series I will look at other issues, such as caching, as possible reasons why Comcast needs extra payments from NetFlix. Because it doesn’t appear to me that NetFlix ought to be responsible for the way Comcast builds their own networks. One would think that those networks are built to deliver the bandwidth customers have paid for, regardless of where on the web that bandwidth is coming from.