Tariffs and Broadband Deployment

A number of my clients are receiving letters from telecom supply houses and vendors warning them of price increases due to the tariffs recently imposed on trade with China. It’s no secret in the telecom world that much of the electronics and components used to build fiber or fixed wireless networks come from China.

The following list is from a letter sent by Power & Tel, a large telecom supply house, to its customers. Other supply houses and vendors are sending similar notices. The notice lists examples of components that will receive the new tariff additives. As usual in these situations, some components fall into gray areas, and it will take a while for vendors to figure out the full tariff impact.

The new tariffs were imposed by the U.S. Trade Representative (USTR) at the order of the President and are implemented by U.S. Customs and Border Protection. There have been multiple USTR lists of affected products, and the following is Power & Tel’s summary of the various tariff actions:

USTR Tariff List 1 – 25% tariff effective on July 6, 2018. Affects optical fiber cables, aluminum, copper, steel & iron.

USTR Tariff List 2 – 25% tariff effective on August 23, 2018. Affects fiber adapters, connectors, splice sleeves, grounding hardware.

USTR Tariff List 3 – 10% tariff effective on September 24, 2018. On January 1, 2019 the tariff increases to 25%. Affects electronics, power cables, active optical cable, direct attach cables, cable management and racks, batteries, power supplies, metal hand tools, power tools, hardware.
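The tariff math itself is simple: the additive is applied on top of the imported cost of the component. As a rough illustration – with hypothetical prices, not actual vendor quotes – here is what the List 3 schedule does to a single piece of customer equipment:

```python
# Hypothetical example: effect of the List 3 tariff on a $200 imported ONT.
# The price is illustrative only, not an actual vendor quote.

def landed_cost(base_price: float, tariff_rate: float) -> float:
    """Return the cost of a component after applying a tariff additive."""
    return base_price * (1 + tariff_rate)

ont_price = 200.00                         # pre-tariff cost of a customer ONT
cost_at_10 = landed_cost(ont_price, 0.10)  # during the initial 10% period
cost_at_25 = landed_cost(ont_price, 0.25)  # after the increase to 25%

print(f"10% tariff: ${cost_at_10:.2f}")    # $220.00
print(f"25% tariff: ${cost_at_25:.2f}")    # $250.00
```

Multiplied across every ONT, router, battery and rack in a deployment, that per-unit increase adds up quickly.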

Included in this list are several major components that are part of every broadband deployment. This includes things like:

  • Core routers and switches for fiber and wireless networks
  • Core electronics and customer ONTs for FTTP
  • Core electronics and customer radios for fixed wireless
  • The core of central offices and huts, including racks, batteries, power supplies, grounding hardware, cables, hardware, test equipment and other tools
  • Cable settop boxes and WiFi routers

There are numerous sources of non-Chinese fiber optic cable, but many of the components of an outside plant network – fiber adapters, connectors, pre-connectorized drops, etc. – will be affected.

I try not to be political in my blog – and it’s normally easy to do because broadband deployment is a topic that enjoys bipartisan support. I’ve always found in rural America that politicians from both parties support fiber and wireless network deployments because they understand that their local economy needs broadband to thrive and survive. I visited a number of rural counties in the last year where the elected officials say that lack of broadband access has become the number one issue of concern in their county.

However, looking at the size and scope of these tariffs, I have no doubt that building broadband just got more expensive. I won’t be surprised if this kills or delays some pending construction projects, and it’s something that will have to be factored into any forward-looking business plans. I’m sure I share the sentiment of many in the industry in hoping that these tariffs are temporary.

Is There a Web Video Crisis – Part II

In the first part of this series I looked at the three areas of the customer network – the edge network, the distribution network and the Internet backbone. I came to the conclusion that if Comcast and Verizon operate the same way as the hundreds of carriers I work with, then the fees paid by end-user customers ought to be sufficient to cover the costs of those portions of the network and to ensure that the network is robust enough to carry video. It seems to me that nobody but Comcast and Verizon sees a need to charge for an Internet ‘fast lane’.

But those three network components are not the entire Internet network, so to be fair to Comcast and Verizon there are a few other places to look. In this blog I will consider what happens when a lot of video hits the web at the same time. Let’s see if this might be the reason Comcast needs an Internet fast lane.

There are two different ways that video traffic can be larger than normal on the web. The first is when there is a major event simulcast on the web – a simulcast is when a video is sent to many locations at exactly the same time. The granddaddy of such events is the Super Bowl, but there are a lot of other big events like the Olympics and the soccer World Cup. In those instances a whole lot of people are watching the same event. Simulcasts don’t always involve sports – one of the more recent web crashes came during the finale of True Detective on HBO Go.

There have been a few major crashes during simulcast events in the past, and as often as not the problem has been at the programmer’s server, which received more requests for signals than it could handle. But considering simulcasts highlights another part of the Internet – the servers, switches and routers used to send, route and receive traffic over the web. These devices are the routing core of the Internet and are found today at large data centers. It certainly is possible for these devices to get overwhelmed. In past web crashes it was most likely these devices, and not the fiber data network, that got overwhelmed by video.

On a per-customer basis the servers, routers and switches are the least expensive part of the Internet network. That is not to say they are cheap, but they cost a lot less than building fiber networks. As mentioned above, the point of stress during simulcast video is the originating servers, and so it would be incredibly cynical of Comcast to claim that they need to charge Netflix a premium price because they don’t have enough servers and routers to handle the traffic. Their terminating routers ought to be sufficient and ready to handle large volumes of video as a normal course of business.

The other way that web video traffic can get big is when a lot of people are watching video and each one of them is watching something different. Today people watch what they want when they want and this is the primary way that the web handles video. But there are times when usage is greater than normal, and perhaps this is what drives the need for a fast lane.

Broadcasters like Netflix have helped to ameliorate the effects of large video volumes through caching. For example, Netflix will place a caching server at any large headend, at its own cost, to cut down on the stress on the web. A Netflix caching server contains a copy of all the programming that Netflix predicts people will most want to watch. Anybody who then watches one of those shows pulls the program from the local caching server rather than making a new web request back to the Netflix hubs. I would have to assume that Netflix has provided numerous caching servers to Comcast and Verizon, so this cannot be a reason to charge more for a fast lane.
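The logic of such a caching server can be sketched in a few lines. This is my own simplified model – not Netflix’s actual implementation – showing the key point: popular titles are served locally, and only cache misses generate traffic back to the origin servers.

```python
# Simplified model of a headend caching server: predicted-popular titles
# are pre-loaded locally; only cache misses travel back to the origin.
# This is an illustration, not Netflix's actual software.

class CachingServer:
    def __init__(self, predicted_popular):
        # Pre-populate the cache with content predicted to be in demand.
        self.cache = dict(predicted_popular)
        self.origin_requests = 0

    def fetch_from_origin(self, title):
        # Stand-in for a request back to the provider's central servers.
        self.origin_requests += 1
        return f"stream:{title}".encode()

    def serve(self, title):
        if title in self.cache:
            return self.cache[title]          # local hit: no upstream traffic
        content = self.fetch_from_origin(title)  # miss: go to the origin
        self.cache[title] = content              # then keep a local copy
        return content

server = CachingServer({"popular-show": b"stream:popular-show"})
server.serve("popular-show")   # served locally, origin untouched
server.serve("obscure-film")   # miss: one request back to the origin
server.serve("obscure-film")   # now cached locally
print(server.origin_requests)  # 1
```

The more accurately the provider predicts demand, the fewer requests ever leave the headend.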

But caching doesn’t always solve large demand. First, a Netflix caching device only contains what Netflix predicts will be popular, and if something else they host goes viral it won’t be on the caching server. More importantly, there is a ton of video content on the web that will never be on these kinds of caching servers. If some video from Facebook or YouTube goes viral, it is likely not already cached, because nobody could have predicted it would go viral.

But there is a newer technology that should solve the caching issue. Cisco and smaller companies like PeerApp and Qwilt have introduced a technology called transparent caching, which caches content on the fly. If more than two users in a network ask to see the same content, the device makes a local copy of that content. Within minutes of teens loving some new YouTube video it would be cached locally, and it would stay in the cache until demand for it stops. This technology can drastically reduce the requests back to originating servers at providers like Netflix and YouTube.
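The behavior described above amounts to a request counter with a popularity threshold. Here is my own simplified model of the idea – real transparent caching products from the likes of Cisco and Qwilt work far deeper in the network and are much more sophisticated:

```python
# Simplified model of transparent caching: content gets a local copy on
# the fly once more than two users request it. Real products are far more
# sophisticated; this just shows the core idea.

from collections import Counter

class TransparentCache:
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.requests = Counter()   # how many times each URL was requested
        self.cache = {}             # locally stored copies
        self.origin_hits = 0

    def get(self, url):
        if url in self.cache:
            return self.cache[url]              # served locally
        self.requests[url] += 1
        self.origin_hits += 1
        content = f"content:{url}".encode()     # stand-in for an origin fetch
        if self.requests[url] > self.threshold:
            self.cache[url] = content           # popular: keep a local copy
        return content

cache = TransparentCache()
for _ in range(5):                  # five viewers of the same viral clip
    cache.get("youtube/viral-clip")
print(cache.origin_hits)            # 3 – only the first three hit the origin
```

Every viewer after the threshold is served locally, so a video going viral stops generating upstream traffic almost immediately.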

My conclusion from this discussion is that I have a hard time seeing how Comcast or Verizon can claim that their routers, switches and servers are inadequate to handle the traffic from Netflix. These are among the cheaper components of the web on a per-customer basis, and the companies ought to have adequate resources to handle simulcasts or viral videos. Even if they don’t, the new technology of transparent caching promises to drastically reduce the web traffic associated with video, since any popular content will be automatically cached locally.