In the first part of this series I looked at the three areas of the customer network – the edge network, the distribution network and the Internet backbone. I concluded that if Comcast and Verizon operate the same way as the hundreds of carriers I work for, then the fees paid by end-user customers ought to be sufficient to cover the costs of those portions of the network and to ensure that the network is robust enough to carry video. It seems to me that nobody but Comcast and Verizon sees a need to charge for an Internet ‘fast lane’.
But those three network components are not the entire Internet network, so to be fair to Comcast and Verizon there are a few other places to look. In this blog I will consider what happens when a lot of video hits the web at the same time. Let’s see if this might be the reason Comcast needs an Internet fast lane.
There are two different ways that video traffic can be larger than normal on the web. The first is when there is a major event simulcast on the web. A simulcast is when a video is sent to many locations at exactly the same time. The granddaddy of such events is the Super Bowl, but there are a lot of other big events like the Olympics and the soccer World Cup. In those instances a whole lot of people are watching the same event. Simulcasts don’t always involve sports – one of the more recent web crashes was during the finale of True Detective on HBO Go.
There have been a few major crashes in the past during simulcast events, and as often as not the problem has been at the programmer’s server, which received more requests for the signal than it could handle. But considering simulcasts highlights another part of the Internet – the servers, switches and routers used to send, route and receive traffic over the web. These devices are the routing core of the Internet and are found today at large data centers. It certainly is possible for these devices to get overwhelmed. In past web crashes it was most likely these devices, and not the fiber data network, that got overwhelmed by video.
On a per-customer basis the servers, routers and switches are the least expensive part of the Internet network. This is not to say that they are cheap, but they cost a lot less than building fiber networks. As mentioned above, the point of stress during a simulcast is the originating servers, and thus it would be incredibly cynical of Comcast to claim that they need to charge Netflix a premium price because they don’t have enough servers and routers to handle the traffic. Their terminating routers ought to be sufficient and ready to handle large volumes of video as a normal course of business.
The other way that web video traffic can get big is when a lot of people are watching video and each one of them is watching something different. Today people watch what they want when they want and this is the primary way that the web handles video. But there are times when usage is greater than normal, and perhaps this is what drives the need for a fast lane.
Broadcasters like Netflix have helped to ameliorate the effects of large video volumes by caching. For example, Netflix will put a caching server at any large headend, at their own cost, to cut down on the stress on the web. A Netflix caching server contains a copy of all of the programming that Netflix predicts people will most want to watch. Anybody who then watches one of these shows pulls the program from the local caching server rather than making a new web request back to the Netflix hubs. I would have to assume that Netflix has provided numerous caching servers to Comcast and Verizon, so this cannot be a reason to charge more for a fast lane.
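To make the idea concrete, here is a toy sketch of that kind of caching server: a fixed catalog of predicted-popular titles is preloaded at the headend, and only requests for everything else travel back across the web to the hub. The class name, titles and preloading scheme are illustrative assumptions of mine, not Netflix's actual architecture.

```python
class CachingServer:
    """Toy model of a predictive caching server at a cable headend."""

    def __init__(self, predicted_popular):
        # Preload local copies of the shows predicted to be most watched.
        self.cache = {title: f"<stream of {title}>" for title in predicted_popular}
        self.hub_requests = 0  # requests that had to cross the web to the hub

    def stream(self, title):
        if title in self.cache:
            return self.cache[title]  # served locally, no extra web traffic
        self.hub_requests += 1        # cache miss: fetch from the distant hub
        return f"<stream of {title} from hub>"


server = CachingServer(predicted_popular=["Popular Show A", "Popular Show B"])
server.stream("Popular Show A")  # served from the headend cache
server.stream("Obscure Title")   # only this one goes back across the web
print(server.hub_requests)
```

However many local viewers watch the popular shows, the hub request counter stays at the number of cache misses, which is the whole point: popular traffic never leaves the headend.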
But caching doesn’t always solve large demand. First, a Netflix caching device only contains what Netflix predicts will be popular, and if something else they host goes viral it won’t be on the caching server. More importantly, there is a ton of video content on the web that is never going to be on these kinds of caching servers. If some video from Facebook or YouTube goes viral it is likely not already cached, because nobody could have predicted it would go viral.
But there is a new technology that should solve the caching issue. Cisco and smaller companies like PeerApp and Qwilt have introduced a technology called transparent caching, which caches content on the fly. If more than two users in a network ask to see the same content, the system makes a local copy of it. Within minutes of teens loving some new YouTube video it would be cached locally, and it would stay in the cache until demand for it stops. This technology will drastically reduce the requests back to the originating servers at providers like Netflix and YouTube.
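The on-the-fly behavior can be sketched in a few lines: count requests per item, and once demand crosses a threshold, keep a local copy so later requests never reach the originating server. The two-request threshold follows the description above; real products from Cisco, PeerApp or Qwilt use far more sophisticated policies (including eviction when demand stops), which this toy omits.

```python
class TransparentCache:
    """Toy model of transparent caching: content is cached on the fly
    once enough users in the network have requested it."""

    def __init__(self, threshold=2):
        self.threshold = threshold  # requests seen before we keep a copy
        self.request_counts = {}    # url -> number of origin requests so far
        self.cache = {}             # url -> locally stored content

    def fetch(self, url, origin):
        # Already cached: serve locally, no traffic back to the origin.
        if url in self.cache:
            return self.cache[url], "cache"
        # Otherwise count the request and pull from the originating server.
        self.request_counts[url] = self.request_counts.get(url, 0) + 1
        content = origin[url]
        # Demand has crossed the threshold: keep a local copy from now on.
        if self.request_counts[url] >= self.threshold:
            self.cache[url] = content
        return content, "origin"


# A viral clip: only the first two requests ever reach the origin.
origin = {"youtube/viral-clip": "<video bytes>"}
tc = TransparentCache(threshold=2)
sources = [tc.fetch("youtube/viral-clip", origin)[1] for _ in range(4)]
print(sources)  # ['origin', 'origin', 'cache', 'cache']
```

Unlike the predictive caching server above, nothing here has to be guessed in advance – whatever actually gets popular caches itself.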
My conclusion from this discussion is that I have a hard time seeing how Comcast or Verizon can claim that their routers, switches and servers are inadequate to handle the traffic from Netflix. These are among the cheaper components of the web on a per-customer basis, and they ought to have adequate resources to handle simulcasts or viral videos. Even if they don’t, the new technology of transparent caching promises to drastically reduce the web traffic associated with video, since any popular content will be cached locally and automatically.