Our Aging Fiber Infrastructure

One thing that I rarely hear talked about is how many of our long-haul fiber networks are aging. The fiber routes that connect our largest cities were mostly built in the 1990s in a very different bandwidth environment. I have a number of clients that rely on long-haul fiber routes and the stories they tell me scare me about our future ability to move bandwidth where it’s needed.

In order to understand the problems of the long-haul networks it’s important to look back at how these fiber routes were built. Many were built by the big telcos. I can remember the ads from AT&T thirty years ago bragging that they had built the first coast-to-coast fiber network. A lot of other fiber networks were built by competitive fiber providers like MCI and Qwest, which saw an opportunity to compete against the pricing of the big telco monopolies.

A lot of the original fiber cables built on intercity routes were small by today’s standards. The original networks were built to carry voice and much smaller volumes of data than today, and many of the cables contain only 48 fiber pairs.

To a large degree the big intercity fiber routes follow the same physical paths, sometimes following interstate highways, but to an even greater extent following the railroad tracks that run between markets. Most companies that move big amounts of data want route diversity to protect against fiber cuts or disasters, yet a significant percentage of the routes between many cities are located right next to the fibers of rival carriers.

It’s also important to understand how the money works on these routes. The owners of the large fiber cables have found it lucrative to lease pairs of fiber to other carriers on long-term leases called IRUs (indefeasible rights of use). It’s not unusual to be able to shop for a broadband connection between primary and secondary markets, say Philadelphia and Harrisburg, and find a half-dozen different carriers. But deeper examination often shows they all share leased pairs in the same fiber sheath.

Our long-haul fiber network infrastructure is physically aging and I’ve seen a lot of evidence of network failures. There are a number of reasons for these failures. First, the quality of fiber glass today has improved by several orders of magnitude over the glass that was made in the 1980s and 1990s. Some older fiber routes are starting to show signs of cloudiness from age, which kills a given fiber pair. Probably even more significant is the fact that fiber installation techniques have improved over the years. We’ve learned that if a fiber cable is stretched or stressed during installation, microscopic cracks can form that slowly spread over time until a fiber becomes unusable. And finally, we are seeing the expected wear and tear on networks. Poles get knocked down by weather or accidents. Contractors occasionally cut buried fibers. Every time a long-haul fiber is cut and spliced it loses a little efficiency, and over time those splices can add up to become a problem.
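To put some rough numbers on why splice losses matter, here is a simple sketch of a span loss budget. The attenuation per kilometer, splice loss, span length and optical budget below are all illustrative assumptions rather than figures from any real route, but they show how decades of accumulated repair splices eat into the margin the original electronics were engineered around.

    # A rough sketch of how splice losses accumulate on an aging long-haul span.
    # All numbers are illustrative assumptions, not measurements from a real network.

    def route_loss_db(length_km, splices, fiber_loss_db_per_km=0.25, splice_loss_db=0.1):
        """Total optical loss: per-km attenuation plus the loss added by each splice."""
        return length_km * fiber_loss_db_per_km + splices * splice_loss_db

    span_km = 80            # assumed distance between repeater/amplifier huts
    optical_budget_db = 28  # assumed loss budget for the span's optics

    for repair_splices in (0, 10, 20, 40):
        loss = route_loss_db(span_km, splices=repair_splices)
        margin = optical_budget_db - loss
        print(f"{repair_splices:3d} splices: {loss:4.1f} dB loss, {margin:4.1f} dB of margin left")

Each splice only adds a fraction of a dB, but on a route that has been cut and repaired for twenty-five years the total can push a span close to, or past, its loss budget.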

Probably the parts of the network that are in the worst shape are the electronics. It’s an expensive proposition to upgrade the bandwidth on a long-haul fiber network because that means changing the lasers not only at the end points of a fiber, but also at all of the repeater huts along the route. Unless a fiber route is fully utilized, the companies operating these routes don’t want to spend the capital dollars needed to improve bandwidth. And so they keep operating old electronics that are often many years past their expected functional lives.

Construction of new long-haul fiber networks is incredibly expensive and it’s rare to hear of any major initiative to build fiber on the big established intercity routes. Interestingly, the fiber to smaller markets is in much better shape than the fiber between NFL cities. These secondary fiber routes were often built by groups like consortiums of independent telephone companies. There were also some significant new fiber routes built using the 2009 stimulus funding.

Today a big percentage of the old intercity fiber network is owned by AT&T, Verizon and CenturyLink. They built a lot of the original network but over the years have also gobbled up many of the other companies that built fiber – and are still doing so, as with Verizon’s purchase of XO last year and CenturyLink’s purchase of Level3. I know a lot of my clients worry every time one of these mergers happens because it removes another of the small handful of actual fiber owners from the market. They are fearful that we are going to go back to the old days of monopoly pricing and poor response to service issues – the two issues that prompted most of the construction of competitive fiber routes in the first place.

A lot of the infrastructure of all types in this country is aging. Sadly, I think we need to put a lot of our long-haul fiber backbone network into the aging category.

The Future of OTT

Level3 just released its third annual report, titled OTT Video Services, in which it asked a wide array of industry experts about the future of OTT. The report posed a variety of questions about the OTT industry to 486 ‘media industry professionals,’ 70% of whom were from the US, with the rest scattered around the world. These kinds of exercises are not scientific surveys and you can’t attach any statistical significance to the results. But since the respondents work in the industry, I don’t know of any better way to understand where the industry thinks OTT is headed.

The most interesting finding (and the one that spawned a few headlines) is that 70% of the respondents think that OTT viewership will surpass traditional television viewership no later than 2022. That is an amazing prediction considering the huge difference today between TV and OTT viewing. While about two-thirds of US homes are expected to watch at least one OTT stream per month this year, OTT is expected to account for only about 20% of the total hours adults spend watching video content.

I can understand why Level3 would sponsor this report each year. The bandwidth required to support an OTT industry that grows from 20% of all of the hours spent watching video up to 50% is going to stress networks everywhere. About a quarter of respondents thought OTT traffic would grow by as much as 25% per year, with almost half of the respondents thinking the growth rate would be between 30% and 50% per year.
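As a rough sanity check on those growth rates, here is a quick compounding calculation. It assumes OTT sits at roughly 20% of viewing hours today and that total viewing hours stay flat – a simplification – but it shows why the 2022 prediction is not as wild as it first sounds.

    # Quick compounding check on the growth rates cited above (assumed, not reported data).
    ott_share_of_hours = 0.20   # roughly where the report puts OTT viewing today

    for annual_growth in (0.25, 0.30, 0.50):
        share = ott_share_of_hours
        years = 0
        while share < 0.50 and years < 15:
            share *= 1 + annual_growth   # assume total viewing hours stay roughly flat
            years += 1
        print(f"{annual_growth:.0%} per year -> OTT passes 50% of viewing hours in about {years} years")

Even the low end of those growth estimates gets OTT past the halfway mark in about five years, which lines up with the 2022 prediction.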

This represents huge bandwidth growth on the backbone networks that Level3 operates as well as on all of the local networks that ISPs use to serve residential customers. If you think your broadband slows down now in the evening, wait just a few years until there is a lot more video on your local network.

The experts did foresee some major challenges for the OTT industry. Their biggest concern was the ability of local ISP networks to deliver a high-quality signal to customers. This was partly a worry that customers would not have enough bandwidth, but it also reflected concerns about the backbone networks and the interconnection between OTT providers and ISPs. It was disagreements between OTT players and the ISPs that prompted the last FCC to get serious about network neutrality. And since it looks like network neutrality will be scrapped, that concern is back on the front burner.

The experts are also concerned that the OTT industry might try to follow the path of traditional TV and begin inserting too many ads. They see ads as one of the major factors driving people from traditional programming to OTT today.

Another concern is the ability of OTT companies to acquire desired programming. There are still some popular cable networks that none of the OTT providers have been able to license. There is particular concern about the ability to acquire regional sports networks, something that is a major draw for a significant proportion of customers. And there is concern about acquiring local network feeds – today the few OTT providers that carry local channels largely show feeds from a few major urban markets.

In looking toward the future, a number of OTT providers are keeping an eye on acquiring virtual reality content, although none of the OTT services carries such content yet today. A higher priority for most OTT providers is beefing up their networks to support both higher frame rates (HFR) and high dynamic range (HDR), and most providers are working toward supporting both options. These technologies can improve the delivery of sports content today and will position OTT providers to offer VR content in the future.
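To see why HFR and HDR translate into a network problem, here is a back-of-the-envelope comparison of raw pixel data rates. The resolutions, frame rates and bit depths are assumed examples, and real streams are heavily compressed so they don’t scale exactly this way, but the direction and rough size of the increase is what ends up pressuring delivery networks.

    # Back-of-the-envelope raw (uncompressed) pixel data rates. Real streams are
    # compressed far below these figures; the point is the relative increase.

    def raw_gbps(width, height, fps, bits_per_channel, channels=3):
        return width * height * fps * bits_per_channel * channels / 1e9

    base = raw_gbps(1920, 1080, 30, 8)        # ordinary 1080p at 30 fps, 8-bit color
    hfr_hdr = raw_gbps(1920, 1080, 60, 10)    # 1080p at 60 fps with 10-bit HDR color

    print(f"1080p30, 8-bit : {base:.2f} Gbps raw")
    print(f"1080p60, 10-bit: {hfr_hdr:.2f} Gbps raw ({hfr_hdr / base:.1f}x)")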

There is also a lot of interest among OTT providers in carrying more live events beyond sports. They know there is high customer demand for live events like the Emmys and other award shows, concerts and similar content.

There is also a lot of interest from OTT providers that carry live network feeds (traditional cable channels shown linearly) in offering a library of video-on-demand content, in the same manner as Netflix. I’ve been a subscriber to Sling TV for a while and some of the networks on the service now offer a lot of VOD content.

It’s going to be an interesting industry to watch. There are around 100 OTT services available in the US today, but only about half a dozen of them have any significant number of customers. I note that even though industry insiders foresee huge growth for the sector, that’s only going to happen if the OTT providers can find a way to offer what people want to watch.

Metropolitan ISPs

I spend most of my time working with rural ISPs. Even my clients that work in larger cities tend to provide service to residential customers and to small and medium businesses. But there is a very competitive market for larger businesses and for businesses that operate in multiple markets.

My clients often run into this when they realize that they are unable to sell broadband to a local chain restaurant, convenience store, bank or other large nationwide or regional business. I recently poked around to see who the carriers are that are selling to metropolitan or nationwide businesses. Some of those on the list will surprise you with their success and there are carriers on the list that you’ve probably not heard of.

Since most of the ISPs in this category don’t report business revenues separately from other revenues, it’s difficult to rank these companies by revenue. Further, some of the companies on this list are almost entirely retail ISPs while others offer wholesale connections to other carriers. Probably the easiest way to compare these carriers is by looking at the number of buildings they claim to have lit with fiber. Of course, even that is not a very reliable way to compare them since there is no standard definition of what constitutes a lit building. But generally these counts are supposed to represent locations with either one very large customer, like a hospital, or else buildings with multiple business tenants. Some of these companies also count locations like data centers or large metropolitan cell towers.

Here are the carriers that claim to provide fiber to more than 5,000 business buildings:

  • Time Warner 75,000
  • Level3 30,000
  • Cox 28,000
  • AT&T 20,000
  • Zayo 16,700
  • Charter 13,800
  • Fibertech 10,400
  • Verizon 10,000
  • Lightower   8,500
  • Sunesys   7,200
  • Cablevision   7,000
  • Frontier   6,300

Missing from this list is Comcast. I can’t find any references to the number of lit buildings they are in. They are a major provider of business broadband and reported just over 1 million business customers along with $1.3 billion in revenue for their Business Services division. Also missing from the list is CenturyLink, which doesn’t seem to publish a lit-building count anywhere that I could find. CenturyLink did claim to be selling to more than 100,000 businesses over fiber at the end of 2015.

On the list, though, are both Charter and Time Warner Cable, which just merged and together would be in 88,800 buildings. Fibertech and Lightower merged in 2015, giving them a combined 18,900 lit buildings.

Level3 and Zayo provide both retail and wholesale fiber products, meaning that they will sell connections into the lit buildings directly to businesses or else to other carriers, and they derive a large portion of their revenues from wholesale sales.

The first thing that surprised me about this list is that the cable companies appear to be in a lot more buildings than AT&T and Verizon. There are two possible explanations for this. One is that each group of companies is counting lit buildings in a different way. For example, the cable companies might be counting buildings like schools while the telcos might only be counting larger multi-tenant buildings. But it’s also possible that the telcos have a strategy of only building fiber to the largest buildings in each market while the cable companies will routinely build to smaller buildings. It does raise the question of whether this is a reasonable side-by-side comparison.

I would also note that some of these companies are growing rapidly and that most of these counts came from 2014 or 2015. Vertical Systems Group (a research and consulting firm that tracks the metro Ethernet market) says that the percentage of the metropolitan businesses connected to fiber grew from 42% in 2014 to 46% in 2015.

The Ongoing Special Access Battle

There is a big battle going on at the FCC over special access rates and the FCC has promised to finally weigh in later this month. Special access in the industry refers to the sale of dedicated TDM circuits like T1s and DS3s. To some extent this is a fight between the big RBOCs and all of the CLECs and other carriers that need special access circuits to reach customers. But this battle actually affects all of us, because the businesses you deal with, such as banks, and other entities (like your local government) are big users of these products.

There are several different issues being contested in the special access investigation currently underway at the FCC. But most of the battle is about the price of special access as well as the unfair practices of the big special access providers like Verizon and AT&T. This whole fight comes down to money, and special access is still a major source of revenue for the big telcos. To show you how big: USTelecom, the lobbying arm for the big telcos, sponsored a study claiming that regulating special access rates could cost 43,560 jobs and $3.4 billion in economic growth over five years.

Special access rates are very high. They were set in the days when TDM circuits were state-of-the-art technology. We all remember when it could easily cost you $700 per month or more to get a T1 to a business – a lot of money to pay for a symmetrical 1.5 Mbps connection. Last year I looked at the data bill for an urban county and they were still buying millions of dollars of these special access circuits – at nearly full cost. I estimated they could cut their bills by 50% to 60% by shifting to an alternate provider.
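To make that concrete, here is a quick cost-per-Mbps comparison. The T1 price is the figure cited above; the Ethernet alternative is a hypothetical quote made up purely for illustration, since actual competitive pricing varies widely by market.

    # Cost per Mbps: the T1 figure comes from the paragraph above; the Ethernet
    # circuit is a hypothetical competitive quote used only for illustration.
    circuits = {
        "T1 special access (1.5 Mbps)": (700, 1.5),
        "Hypothetical 100 Mbps Ethernet": (900, 100.0),
    }

    for name, (monthly_cost, mbps) in circuits.items():
        print(f"{name:32s}: ${monthly_cost / mbps:7.2f} per Mbps per month")

At roughly $467 per Mbps for the T1 versus single digits for a modern Ethernet circuit, it’s easy to see why carriers and businesses resent paying full tariff rates.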

But therein lies the big rub with special access. Once you get outside of the main business district in most cities, the RBOCs are still the only ones that have wires connected to most buildings. And so, as absurd as it sounds, for a huge percentage of the geography in the US special access is still the only way to provide dedicated transport to a business (meaning its data doesn’t get commingled with another business’s traffic). And this means that special access is what CLECs and other carriers must buy from the telcos if they need access to a given business.

The RBOCs make deals with the largest carriers – Level3, XO Communications and Sprint. These carriers can get a substantial discount on special access due to the volumes they purchase. That doesn’t sound unfair until you look into all of the strings that come attached to the volume discounts. For example, Level3 has complained in the FCC docket that there are markets where Verizon requires them to buy 90% of their connections from Verizon – or else not get the discounts. All of the carriers complain about termination charges. Should a customer of one of these carriers move or go out of business, the RBOCs still demand that the carrier pay for the circuit for the length of the contract – lengths which the RBOCs largely dictate.

And of course, if you are not a huge carrier you don’t get the bulk discount. A business that wants to buy special access on its own (such as a bank that wants to connect to multiple ATMs) must pay the full tariff rates.

Another part of the battle at the FCC is that the carriers want more access to the fiber and Ethernet services owned by the RBOCs. But the telcos are very judicious about deciding which of their facilities are open to competitors and which aren’t.

Of real concern in the carrier world is the announcement that Verizon wants to buy the fiber assets of XO Communications. Today most businesses and smaller carriers will buy from the big carriers like XO, Level3 or Zayo because it’s cheaper, there’s less paperwork and these companies are far easier to work with than the telcos.

By buying XO, Verizon will be eliminating one of its largest and most vocal opponents. It will also be folding a lot more fiber into the Verizon networks. The fear is that Verizon will either convert the XO network to special access, meaning the price will go up, or will treat it as fiber reserved for Verizon’s own needs and withdraw the networks from the open market.

In any market where there is a limited amount of fiber built to businesses, removing one of the biggest fiber owners like XO is going to be a big blow to many of those who use it. Many of them are either going to see rate increases or else have to find alternate transport elsewhere.

There is no telling what the FCC will order in this docket. But the position taken by the telcos is typical and a bit scary. They claim that there is vibrant competition available in the marketplace and they accuse the CLECs and carriers of whining to get cheaper prices. They love their monopoly power in most markets and aren’t going to give it up easily.

What’s the Truth About Netflix?

Clearly a lot of customers around the country are having trouble with Netflix. The latest round of finger pointing is going on between Verizon, Netflix and some intermediate transport providers.

Netflix uses adaptive streaming for its standard quality video and this only requires about 2 Mbps at the customer end to get the quality that Netflix intends for the video. HD videos require more bandwidth, but customers are complaining about standard video. A Netflix download requires a burst of data up front so that the movie can buffer ahead of the viewer. But after that it stays steady at the 2 Mbps rate, and the download even pauses when the customer pauses. It’s getting hard to find an urban ISP that doesn’t deliver at least that much speed, so one would assume that any customer who subscribes to at least 2 Mbps data speeds should not be having trouble watching Netflix.
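For readers not familiar with adaptive streaming, the general idea is that the player keeps measuring its recent throughput and picks the highest quality level from a ladder of encoded bitrates that fits comfortably. The sketch below is a minimal illustration of that idea with an assumed bitrate ladder – it is not Netflix’s actual algorithm – but it shows why a 75 Mbps connection should never be the limiting factor for a roughly 2 Mbps stream.

    # Minimal sketch of the general idea behind adaptive streaming. The ladder and
    # safety factor are assumptions for illustration, not Netflix's real algorithm.

    BITRATE_LADDER_KBPS = [235, 560, 1050, 1750, 3000, 5800]  # assumed quality rungs

    def pick_bitrate(measured_throughput_kbps, safety_factor=0.8):
        """Pick the highest rung that fits comfortably inside measured throughput."""
        usable = measured_throughput_kbps * safety_factor
        candidates = [b for b in BITRATE_LADDER_KBPS if b <= usable]
        return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

    for throughput in (1500, 2500, 10000, 75000):
        print(f"{throughput:6d} kbps measured -> stream at about {pick_bitrate(throughput)} kbps")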

But they are. On their blog Verizon talks about a customer who has a 75 Mbps product and who was not getting good Netflix quality. On that blog Verizon says that it checked every bit of its own network for possible choke points and found none. For those not familiar with how networks operate, a choke point is any place in a network where the amount of data traffic trying to pass through can be larger than the capacity at that point. In most networks there are generally several potential choke points between a customer and the outside world. In this blog Verizon swears that there is nothing in its network for this particular customer that would cause the slowdown. They claim that the only thing running slow is Netflix.
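A toy example of what a choke point check looks like: add up the traffic trying to cross a link and compare it to the link’s capacity. The link size and stream counts below are invented purely for illustration.

    # Toy choke point check: a link is a choke point when offered traffic can
    # exceed its capacity. All numbers here are invented for illustration.

    def utilization(link_capacity_mbps, flows_mbps):
        return sum(flows_mbps) / link_capacity_mbps

    link_capacity_mbps = 1000      # assume a 1 Gbps aggregation link
    streams = [2.0] * 600          # 600 concurrent 2 Mbps video streams

    u = utilization(link_capacity_mbps, streams)
    print(f"Offered load is {u:.0%} of capacity -> {'congested' if u > 1 else 'fine'}")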

This is not to say that there are no overloaded choke points anywhere in Verizon’s networks. It’s a big company, and with the growth of demand for data they are bound to have choke points pop up – every network does. But one would think that their fiber FiOS network would have few choke points, and so it’s fairly easy to believe Verizon in this instance.

Verizon goes on to say that the problem with this Los Angeles customer is either Netflix or the transit providers who are carrying Netflix traffic to Verizon. Verizon is not the only one who thinks it’s the transit interface between the networks. Here is a long article from Peter Sevcik of NetForecast Inc. that shows what happened to the Netflix traffic at numerous carriers both before and after Netflix started peering directly with Comcast. This data shows that traffic got better for everybody else immediately upon the Comcast transition, which certainly indicates that the problem is somewhere in the transit between Netflix and the ISPs.

Verizon says the problem is that Netflix, or the intermediate carriers, don’t want to buy enough bandwidth to eliminate the choke points. Sounds like a reasonable explanation for the troubles, right? But then Dave Schaeffer, the CEO of Cogent, came forward and pointed the finger back at Verizon. He says that the problem is indeed in the interface between Cogent and Verizon. But Schaeffer claims this is Verizon’s fault since they won’t turn up additional ports to relieve the traffic pressure.

So now we are back to square one. The problem is clearly in the interface between Verizon and carriers like Cogent. But they are blaming each other publicly. And none of us outside of this squabble are going to know the truth. Very likely this is a tug-of-war over money, which would fall in line with complaints made by Level3, which says that Verizon is holding traffic hostage to extract more money from the transit carriers.

The FCC is looking into this and it will be interesting to see what they find. It wouldn’t be surprising if there is a little blame on both sides, which is often the case when network issues devolve into money issues. Carriers don’t always act altruistically and sometimes these kinds of fights almost seem personal at the higher levels of the respective companies. The shame from a network perspective is that a handful of good technicians could solve this problem in a few hours. But in this case even the technicians at Verizon and the transit carriers might not know the truth about the situation.

Why We Need Network Neutrality

While the FCC has been making noise about finding a way to beef up net neutrality, the fact is that the courts have gutted it and ISPs are more or less free today to do whatever they want. In March, Barbara van Schewick, a Stanford professor, had several ex parte meetings with the FCC and left behind a great memo describing the current dilemma in trying to rein in network neutrality violations.

In this memo she describes some examples of bad behavior by US and British ISPs. While she highlights some well-known cases of overt discrimination by ISPs, she believes the fact that the FCC has actively intervened over the last decade in such cases has held the ISPs at bay. But now, unless the FCC can find some way to put the genie back into the bottle there are likely to be many more examples of ISPs discriminating against some portions of web traffic.

Certainly ISPs have gotten a lot bolder lately. Comcast essentially held Level3 and Netflix hostage by degrading their product to the point of barely working in order to extract payments out of them. And one can now imagine AT&T and Verizon doing the same thing to Netflix, and all of the ISPs then turning to other big content providers like Amazon and Facebook and demanding the same kind of payments. It seems that we have now entered a period where it’s a pay-for-play network, since the FCC did nothing about the issue.

The US is not the only place in the world that has this issue. We don’t have to look at the more extreme places like China to see how this might work here. Net neutrality violations are pretty common in Europe today. A report in 2012 estimated that one in five users there was affected by ISP blocking. The things that have been blocked in Europe are across the board and include not only streaming services, but also voice services like Skype, peer-to-peer networks, Amazon cloud services, gaming, alternate email services and instant messaging.

If we don’t find a way to get net neutrality under control the Internet is going to become like the wild west. ISPs will slow down large bandwidth users that won’t pay them. They will block anybody who is doing too good a job of competing against them. The public will be the ones who suffer from this, but a lot of the time they won’t even know it’s being done to them.

I don’t know anybody who thinks the FCC has the courage to take the bold steps needed to fix this. The new Chairman talks all the right talk, but there has been zero action against Comcast for what they did to Netflix. I imagine that the ISPs are still taking it a little easy because they don’t want to force the FCC to act. But the FCC’s threats of coming down on violators are going to sound hollow as each day passes and nothing happens.

Professor van Schewick points out that, absent strong rules from the FCC, there is no other way to police network neutrality. Some have argued that antitrust laws can be used against violators. But in the memo she demonstrates that this is not the case and that antitrust law is virtually worthless as a tool to curb ISP abuses.

It’s not just the big ISPs we have to worry about. There are a lot of smaller ISPs in the country in the form of telcos, cable companies, municipalities and WISPs. It’s not hard to picture some of the more zealous of these companies blocking things for political or religious reasons. One might assume that the market would act to stop such behavior, but in rural America there are a whole lot of people who only have one choice of ISP.

I hope that things don’t get as bad as I fear they might and that mostly common sense will rule. But as ISPs violate the no-longer functional net neutrality rules and nothing happens they are going to get bolder and bolder over time.