The Next Generation of 911

I’ve started noticing news articles about the next generation of 911 (NG911), so it seems the public is becoming aware that a big change is coming in the way 911 works. Nationwide we are in the process of migrating from traditional 911 to a fully IP-based system that will include a lot of new features. When fully implemented, NG911 will allow interactive text messaging and smart call routing based on caller location that considers factors such as the workload at the closest 911 center, current network conditions, and the type of call. NG911 will also enable a data stream between callers and the 911 center so that they can exchange pictures, videos (including support for American Sign Language), and other kinds of data that enhance a 911 center’s ability to do its job, such as building plans or medical information.

NG911 will be implemented in phases and many places are already experimenting with some of the new features like text messaging. But other parts of the final IP-based 911 are still under development.

NG911 is going to replace today’s circuit-switched 911 networks, which carry only voice and a very limited amount of data. Today each carrier that handles voice calls must provide dedicated voice circuits between itself and the various 911 centers in its service area. For landlines, the 911 center that any given customer reaches is predetermined based upon their telephone number.

But number-based 911 has been having problems with some kinds of calls. There are numerous examples where 911 was unable to locate mobile callers because triangulation is an imperfect way to pinpoint a caller’s location. And for a number of years it’s been possible to move a VoIP phone anywhere there is a data connection, yet the current 911 systems have no way to identify or locate such callers. Locating callers is going to get even harder as we start seeing huge volumes of WiFi-based VoIP from cellphones, as cellular carriers dump voice traffic onto landline data networks in the same manner they already have with other data. The promise is that NG911 will be able to handle the various flavors of VoIP.

There are a lot of new standards being developed to define the operating parameters of NG911. The standards are being driven through NENA, the National Emergency Number Association. Many of these standards are now in place, but they keep evolving as vendors try different market solutions. A lot of NG911 is going to be driven by the creation and use of a number of new database systems, which will be used to manage functions like call validation, smart routing control, and the processing of new NENA-approved call records.
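
To make the smart-routing idea concrete, here is a minimal illustrative sketch, and only a sketch: it is not the NENA i3 specification, and every class, field, and weighting rule in it is a hypothetical stand-in. The idea it shows is the one described above: map the caller’s location to candidate 911 centers, then apply policy (current workload, type of call) to decide where the call actually lands.

```python
# Illustrative sketch of location-based 911 call routing.
# NOT the NENA i3 specification -- a toy model of the idea described above:
# map the caller's location to candidate 911 centers, then apply policy
# (current workload, call type) to pick where the call actually goes.

from dataclasses import dataclass
from math import dist

@dataclass
class PsapCenter:            # a 911 answering point (hypothetical fields)
    name: str
    location: tuple          # (x, y) in arbitrary units
    open_positions: int      # call-takers currently free
    handles_text: bool       # can it accept text/video sessions?

def route_call(caller_loc, call_type, centers):
    """Pick the best 911 center for a caller (toy policy)."""
    # Keep only centers that can handle this type of call.
    eligible = [c for c in centers if call_type == "voice" or c.handles_text]
    # Prefer nearby centers, but penalize ones with no free call-takers.
    def score(c):
        distance = dist(caller_loc, c.location)
        overload_penalty = 100 if c.open_positions == 0 else 0
        return distance + overload_penalty
    return min(eligible, key=score)

centers = [
    PsapCenter("County A", (0, 0), open_positions=0, handles_text=True),
    PsapCenter("County B", (5, 5), open_positions=3, handles_text=True),
]
print(route_call((1, 1), "text", centers).name)   # -> County B (A is swamped)
```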

The new IP-based 911 networks are being referred to as ESInets (Emergency Services IP Networks). These are managed private networks that will be used to support not only 911 but also other types of public safety communications.

The overall goal is to do a much better job of responding to emergencies. Today there are far too many examples of calls to 911 that never get answered, calls that are sent to the wrong 911 center, or calls where the 911 operators can’t determine the location of the caller. Further, the new system will let the public reach 911 in ways other than voice calls, and there will be a two-way exchange: pictures and videos sent to the 911 center, and floor plans or medical advice sent back to callers. When fully implemented this should be a big leap forward, resulting in lower costs from more efficient use of our emergency resources as well as more lives saved.

Google’s Experiment with Cellular Service

As I wrote this (about a week ago), Google opened up the ability to sign up for its Project Fi phone service for a 24-hour period. Until now the service has been by invitation only, limited, I think, by the availability of the Google Nexus phones. But Google is launching the new Nexus 5X phone, and so it is providing open sign-up for a 24-hour period.

The concept behind the Google phone plan is simple. They sell unlimited voice and text for $20 per month and sell data at $10 per gigabyte as it’s used. The Google phone works on WiFi networks and falls back to either the Sprint or T-Mobile network when a caller is out of range of WiFi. And roaming is available on other carriers when a customer is not within range of any of the preferred networks.
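
As a quick illustration of that pricing model, here is a minimal sketch of how a monthly bill works out under the $20-plus-$10-per-gigabyte structure described above. The function and variable names are my own, and the sketch ignores taxes, fees, and any credits for unused data.

```python
# Minimal sketch of the Project Fi pricing described above:
# $20/month for unlimited voice and text, plus $10 per gigabyte of data
# actually used.  Function and variable names are my own.

def monthly_bill(gb_used: float, base: float = 20.0, per_gb: float = 10.0) -> float:
    """Return the month's charge for a single line."""
    return base + per_gb * gb_used

for usage in (0.5, 1.0, 2.3):
    print(f"{usage} GB -> ${monthly_bill(usage):.2f}")
# 0.5 GB -> $25.00, 1.0 GB -> $30.00, 2.3 GB -> $43.00
```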

Cellular usage is seamless for customers and Google doesn’t even tell a customer which network they are using at any given time. They have developed a SIM card that can choose between as many as 10 different carriers although today they only have deals with the two cellular carriers. The main point of the phone is that a customer doesn’t have to deal with cellular companies any longer and just deals with Google. There are no contracts and you only pay for what you use.

Google still only supports this on their own Nexus phones for now although the SIM card could be made to work in numerous other phones. Google is letting customers pay for the phones over time similar to what the other cellular carriers do.

Google is pushing the product hardest in markets where it has gigabit networks. Certainly customers who live with slow or inconsistent broadband won’t want their voice calls routed first over WiFi.

The main issue I see with the product is that it is an arbitrage business plan. I define as arbitrage anything that relies on a primary resource over which the provider has no control. Over the years many of my clients have become all too familiar with arbitrage plans that came and went at the whim of the underlying providers. For example, Sprint has sold numerous wholesale products like long distance, dial tone, and cellular plans that some of my clients built into a business plan, only to have Sprint eventually decide to pull the plug and stop supporting the wholesale product.

I am sure Google has tied down Sprint and T-Mobile for the purchase of wholesale voice and texting for some significant amount of time. But as with any arbitrage situation, these carriers could change their minds in the future and strand both Google and all of its customers. I’m not suggesting that will happen, but I’ve seen probably a hundred arbitrage opportunities come and go in the marketplace during my career and not one of them lasted as long as promised.

It’s been rumored that Apple is considering a similar plan. If they do, then the combined market power of both Google and Apple might make it harder for the underlying carriers to change their mind. But at the end of the day only a handful of companies own the vast majority of the cellular spectrum and they are always going to be the ones calling the shots in the industry. They will continue with wholesale products that make them money and will abandon things that don’t.

There are analysts who have opined that what Google is doing is the inevitable direction of the industry and that cellular minutes will get commoditized much as long distance was in the past. But I think these analysts are being naive. AT&T and Verizon are making a lot of money selling overpriced cellular plans to people. These companies have spent a lot of money on spectrum and they know how to be good monopolists. I still laugh when I think about how households that used to spend $30 to $50 per month for a landline and long distance now spend an average of $60 per family member for cellphones. These companies have done an amazing job of selling us on the value of the cellphone.

Perhaps the analysts are right and Google, maybe with some help from Apple, will create a new paradigm where the carriers have little choice but to go along and sell bulk minutes. But I just keep thinking back to all of the past arbitrage opportunities where the buyers of the service were also told that the opportunity would be permanent – and none of them were.

Special Access Rate Investigation

There is an investigation going on at the FCC that is probably long overdue: a look at special access rates. Special access rates are the rates telephone companies charge for TDM data circuits such as T1s.

You might think that T1s and TDM technology would be fading away, but the large telcos are still making a fortune by requiring other carriers and large businesses to interface with them using TDM circuits and then charging a lot of money for the connections. As an example, the connections between a large telco like AT&T and CLECs or long distance carriers are still likely to consist of DS-3s (each the equivalent of 28 T1s).

There are also still a lot of businesses that use T1s. Many older phone systems at small businesses need a T1 interface to connect back to the phone company. And in very rural markets with no last-mile fiber, the telcos are still selling T1 data connections to businesses, delivering the paltry 1.544 Mbps that the technology can carry.

The main thrust of the investigation is the prices being charged. In places with no competition the telcos might still charge between $400 and $700 per month for a T1 connection. And it’s not unusual for carriers to have to pay thousands of dollars per month to interface with the large carriers at a regional tandem switch.

There was a time when the prices charged for TDM circuits were somewhat cost-based, although as somebody who did some of the cost studies behind the rates, I can tell you that every trick in the book was used to justify the highest possible rates. But in a 100% copper network there was some logic behind the charges. For example, if a business bought a T1 the phone company had to dedicate two copper pairs through the network for that service, plus provide fairly costly electronics to supply the T1. I remember when T1s first hit the market; they were a big step forward in telco technology.

But technology has obviously moved forward and we now live in an Ethernet world. The FCC has been working on a transition of the public switched telephone network from TDM to all-IP and this investigation is part of that process.

The prices for TDM special access are all included in tariffs, and for the most part the rates have not changed in years, or even decades. Where it used to be a big deal, for example, for a telco to send a DS3 between its offices, the bandwidth of a DS3, at 45 Mbps, is barely a blip today inside the huge Ethernet data pipes the phone companies use to connect their locations. Even if the cost for a DS3 was justified at some point in time, those same circuits are carried today inside much larger data pipes and the cost of transporting the data has dropped immensely.
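
To put that in perspective, a little arithmetic shows how small a DS3 is inside modern transport. The 10 Gbps wavelength below is just an assumed example of the kind of Ethernet pipe carriers use between offices; the T1 and DS3 figures come from the text above.

```python
# How big is legacy TDM capacity relative to a modern Ethernet transport pipe?
# The 10 Gbps wavelength is an assumed example of an inter-office pipe.

T1_MBPS = 1.544            # a single T1
DS3_MBPS = 28 * T1_MBPS    # a DS3 carries 28 T1s (~45 Mbps at the line rate, with overhead)
PIPE_MBPS = 10_000         # assumed 10 Gbps Ethernet wavelength

print(f"DS3 payload: {DS3_MBPS:.1f} Mbps")                # ~43.2 Mbps of T1 payload
print(f"Share of a 10 Gbps pipe: {45 / PIPE_MBPS:.2%}")   # ~0.45% using the 45 Mbps line rate
```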

It’s good that the FCC is investigating this, but to a large degree it’s the FCC’s own fault that the rates are so high. It’s been decades since either the FCC or the state regulatory commissions required cost studies for TDM circuits. And without prodding from the regulatory agencies, the telcos have all let the old rates stand and have happily billed them year after year. This investigation should have been done soon after the Telecommunications Act of 1996, because the rise of competitive telecom companies created a boom in special access sales, all at inflated prices.

Special access rates matter a lot to small carriers. For example, special access is one of the largest expenses for any company that wants to provide voice services. It’s not unusual for a company to spend $100,000 or more per year buying special access services even if it delivers only a tiny volume of voice traffic. As you would expect, the high costs hurt small carriers much more than large carriers, which can better fill up the pipe between themselves and the large telco.

For years the telcos have hidden behind the fact that these rates are in a tariff, meaning that they are not negotiable for other carriers. But at the same time, the telcos routinely drop rates significantly when selling special access circuits in a competitive market. The high special access rates apply only to those small carriers and businesses who are too small to negotiate or who do not operate in a competitive part of the network. It’s definitely time for these rates to be brought in line with today’s costs, which are a small fraction of what the telcos are charging. It would not be shocking for the FCC to determine that special access rates are 70% to 90% too high, particularly when you consider that most of the network and electronics that support them have been fully depreciated for years.

How Fast is Your Data Traffic Growing?

I saw a quote recently from Jeff Finkelstein, the chief of networks at Cox Communications, who said that the data demand on his networks was growing at 53% per year. I would hope that the managers of most large networks can cite their growth statistics.

There has been a metric in the industry that residential data usage doubles about every three years. This metric has roughly held true since the early dial-up days. Year after year people download more than they did the year before.

But it’s easy to lose sight of the fact that the bandwidth people and businesses use for getting to the web is only a piece of the usage on an ISP network. There are two other big uses of these networks that are growing faster than web browsing: we are seeing the first real growth in traffic from the Internet of Things, but the fastest-growing contributor to network traffic right now is machine-to-machine (M2M) traffic.

M2M is when devices talk to each other. One example would be programs that automatically back up data to the cloud. PCs and cellphones now routinely send data to and from the cloud without any specific action by the user. The proliferation of storing data and running programs in the cloud has caused M2M usage to explode.

The expected increase in M2M traffic is staggering as more and more things move into the cloud. In 2015 so far the average M2M traffic worldwide is about 50 terabytes per month. By 2018 that is expected to grow to over 900 terabytes per month. And while IoT traffic is relatively small right now, it is starting to grow rapidly as well.

Finkelstein says that the only way for his company to keep up with this fast growth is through the use of software defined networks (SDN). Cox uses SDN today to identify segments of traffic and route everything as efficiently as possible. He says the company can isolate things like children’s traffic from parents’ traffic from business traffic and route each differently.

Just a few years ago there was a proliferation of peering arrangements established at companies like Cox. They created direct peering connections with companies like Google. But peering is getting more complicated and the goal for a large company is to peer with the major clouds – the Google cloud, the Amazon cloud, the Microsoft cloud etc. And that is where SDN comes into play to help route traffic as efficiently as possible to save on transport costs and to cut down on latency.

What this means for the small ISP is that the rate of growth of overall data traffic is accelerating. The 53% annual growth number would have seemed unbelievable five years ago. For anybody not preparing properly, that level of exponential growth can catch up with a network in a hurry. If Cox’s 53% growth is sustained, the company will be carrying more than eight times today’s traffic five years from now.
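
A quick compounding check shows why that number matters. The figures below simply apply the growth rates quoted above: the industry’s doubling-every-three-years rule of thumb and Cox’s 53% per year.

```python
# Compounding check on the growth rates quoted above.
# Doubling every three years works out to roughly 26% per year; 53% per year
# compounds to more than 8x the traffic in five years.

doubling_rate = 2 ** (1 / 3) - 1          # annual rate implied by doubling every 3 years
print(f"Doubling every 3 years ~= {doubling_rate:.0%} per year")   # ~26%

cox_rate = 0.53
for years in (3, 5):
    print(f"{years} years at 53%/yr -> {(1 + cox_rate) ** years:.1f}x today's traffic")
# 3 years -> 3.6x, 5 years -> 8.4x
```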

I don’t know how many network operators are planning ahead for that kind of growth. Certainly every network is seeing growth even if it’s not at quite the speed Cox is seeing. Traffic on rural networks is probably not growing quite as quickly as the Cox network, but it is still growing rapidly.

The major issue for most network owners will be keeping an eye on the various choke points in their networks to understand where more data is going to cause problems in the near future. Choke points can exist at many different places in the network, from the backbone data pipes down to neighborhood nodes. And keeping all parts of a network ahead of demand is going to require capital spending to upgrade electronics.

Companies Choose Sides on Surveillance Legislation

There is a battle brewing on Capitol Hill over the future of data security and surveillance. The proposed law is called CISA (the Cybersecurity Information Sharing Act). A summary of the bill is here.

A lot of the large tech companies like Apple, Amazon, Google, Microsoft, Dell, Netflix, Oracle, Twitter, Yahoo, and Wikipedia have come out against the proposed law. But on the other side, in favor of the legislation, are the large carriers such as AT&T, Verizon, and Comcast, along with a few tech companies like Cisco, HP, and Intel.

In a nutshell, the legislation replaces the former NSA surveillance program with a program under the Department of Homeland Security. While a significant portion of the bill is aimed at creating a national cybersecurity policy, the legislation also allows for government surveillance of phone and data records very similar to what has been collected by the NSA. Interestingly, the Department of Homeland Security is not in favor of the bill and says that it sweeps away privacy protections.

One thing is clear through many polls: American citizens don’t like the idea of being spied on by the government. It’s an issue that polls consistently across political, religious, and age differences. And so, to a large degree, the tech companies against this surveillance are voicing what they hear from their customers. And not unexpectedly, many of the companies in favor of the legislation are those that profit significantly by handling the government surveillance work.

The biggest issue the opponents see in the bill is that it requires data gathered anywhere in the government to be shared with multiple federal agencies. I suppose this is a way to avoid letting a single agency like the NSA gather and hold all of the data on citizens. But nobody believes that the government is capable of protecting all of the gathered data. In a recent discussion on the floor of the Senate, Senator Ron Wyden (D-Ore.) summarized this well: “There is a saying now in the cybersecurity field, Mr. President: if you can’t protect it, don’t collect it.” If the NSA couldn’t keep things secret, then how can multiple federal agencies protect against hacking and leaks?

Certainly the recent attacks on government personnel records are a good indicator of this. I have many friends who work in the government and they tell me that government computer and software systems are typically a few generations behind the commercial world, and due to the antiquated government purchasing process their systems are likely to always be behind.

I am certainly no security expert, but I do know that I don’t like the idea of the government gathering data about everyone. And I certainly don’t trust them to keep that data safe from hacking from the outside or abuse from the inside.

The other feature of the bill that is not very attractive is that it seems to put a lot of emphasis on creating a new government bureaucracy, which is likely to be nearly worthless in actually stopping cyberterrorism. The security fight on the web is already being fought by a number of web security companies and it’s a battle that changes daily. It just seems unlikely that government bureaucrats and policies can keep up with the real world security issues that require a daily fight against new viruses and new threats.

I’ve written a few times about how one of the biggest threats to the health of the web is government surveillance. It has already driven a lot of countries to erect firewalls around their country’s data. And it is driving people, and companies like Apple, to encrypt everything. It’s extremely naïve to think that the real terrorists in the world aren’t already fully encrypted and part of the dark web. I can understand the feeling that we have to do something about security, but gathering data about every citizen in the country and then sharing that across multiple government agencies doesn’t feel like the way to do anything but make us even more vulnerable.

The Non-boom of OTT Programming

I recently looked back at research I did a year ago, and at that time there was a lot of press talking about how over-the-top video offerings were going to soon flood the market, leading to a boom in cord cutting. But looking at the OTT offerings on the market today, it’s easy to see that the flood of new OTT entrants never materialized.

My look backwards was prompted by an article citing the CEO of CBS, who said that his network had gotten requests from Facebook, Apple, and Netflix seeking the rights to both TV shows and live broadcasts. Those are certainly some powerful companies, and other than Netflix, the one company you would expect to be making such requests, it might portend some new OTT offerings. Many pundits in the industry have been predicting an Apple OTT offering for a number of years to go along with the Apple TV product.

I’m a cord cutter myself and so I’m always interested in new OTT offerings. But for various reasons, mostly associated with price, I am not very interested in most of what is out there today. We subscribe to Netflix, Amazon Prime, and I’ve tried Sling TV twice. But I have not seen any compelling reason to try the other OTT offerings. The list of pay OTT content that’s available is still pretty short, as follows:

  • Showtime: $11 per month with an Apple TV device (which I don’t have).
  • HBO Now: $15 per month with an Apple TV device, and coming soon to Google Play and through Cablevision.
  • CBS All Access: $6 per month but blocks sports content like the NFL.
  • Nickelodeon Noggin: $6 per month.
  • Sling TV: $20 per month. Mix of sports and popular cable networks.
  • PlayStation Vue: Starts at $50 per month. Includes both broadcast and cable networks. This seems like an abbreviated cable line-up, but at cable TV prices.
  • Comcast Stream: $15 per month, only for non-TV devices and must have a Comcast data product. A dozen broadcast networks plus HBO and Streampix.
  • Netflix: $8 per month.
  • Amazon Prime: $99 per year. Includes free or reduced shipping on Amazon purchases and free borrowing of books and music.
  • Hulu Plus: $8 per month with commercials and $12 without commercials. Mostly network TV series.
  • Verizon Go90: Free to certain Verizon wireless customers.

So why hasn’t there been an explosion of other OTT offerings? I think there are several reasons:

  • The standalone networks like CBS and Nickelodeon are basically market tests to see whether there is any interest from the public in buying one channel at a time. These channels are being sold at a premium price of $6 per month, and it’s hard to think that many households are willing to pay that much for a single channel. Most networks want to be very cautious about moving their line-ups online and are probably watching these trials closely. One doesn’t have to multiply out the $6 rate very far to see that any household trying to put together a line-up one channel at a time is going to quickly spend more than a traditional expanded basic cable line-up for a lot fewer channels (see the quick arithmetic after this list).
  • HBO and Showtime have nothing to lose. Game of Thrones has been reported as the most pirated show ever, so HBO is probably going to snag some of the cord cutters who have been pirating the show. The prices for these networks are just about the same as what you’d pay for them as part of a cable subscription. But there aren’t many other premium networks out there that can sell this way.
  • One has to think that the major hurdle for anybody putting together a good OTT line-up is getting the programmers to sell them the channels they want at a decent price. The programmers don’t have much incentive today to help OTT providers steal away traditional cable subscriptions. Whereas somebody like Sling TV might buy a few channels from a given programmer, that programmer makes more money when cable companies buy its whole lineup. So it’s likely that the programmers are making this hard and expensive for OTT companies. I’ve not seen any rumors about what companies like Sling TV are paying for content, but Sling isn’t like most OTT companies in that it is owned by Dish Network, which is already buying a huge pile of programming. It’s got to be harder for somebody else to put together the same line-up. The dynamics of this might change someday if there is ever a true exodus of traditional cable customers fleeing the cable companies. But for now cord-cutting is only a trickle, and most of these networks are still expanding like crazy overseas to make up for any US losses.
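
Here is a minimal sketch of that à-la-carte arithmetic. The 5/10/20-channel line-ups and the roughly $60 expanded-basic price are assumptions purely for illustration; the $6 figure comes from the CBS and Nickelodeon prices above.

```python
# Rough arithmetic behind the point above: buying channels one at a time at
# the CBS/Nickelodeon-style $6 price quickly costs more than a bundle.
# The channel counts and the $60 expanded-basic price are assumptions.

PER_CHANNEL = 6.00
EXPANDED_BASIC = 60.00     # assumed price of a traditional expanded basic package

for channels in (5, 10, 20):
    a_la_carte = channels * PER_CHANNEL
    print(f"{channels:>2} channels a la carte: ${a_la_carte:.0f} "
          f"(bundle of 100+ channels: ${EXPANDED_BASIC:.0f})")
# 10 channels already costs as much as the assumed bundle; 20 costs twice as much.
```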

What if Nobody Wants to Sell Video?

Some of the largest cable companies in the country have begun to de-emphasize cable TV as a product, and it makes me wonder if smaller companies should consider the same strategy. It’s been clear to everybody in the industry that margins on cable have dropped, so the question every cable provider should ask is how hard they should work to retain cable customers or introduce new innovations in their cable products.

The largest company that is downplaying cable TV is Cable ONE. Earlier this year Cable ONE’s CEO James Dolan told investors that cable had accounted for 64% of his profits in 2005, but by 2018 he expects that to drop to under 30 percent. Like many other cable companies, the lost margins on cable have been replaced by sales of broadband products.

Cable ONE has gone farther than most cable companies in de-emphasizing cable. For example, they and Suddenlink decided to drop the Viacom suite of cable networks when the programmer asked for a giant rate increase last year. This decision has cost both companies cable subscribers (Cable ONE lost over 100,000 cable customers in the year after the decision), but they see it as a good long-term strategy.

If you are a small ISP that offers cable, then your situation has to be far more dire than Cable ONE’s. I have one small client who dropped their cable offering altogether earlier this year, and they were surprised by how positively it affected them. They went from having a room full of busy customer service reps to having almost no inbound calls. It turns out that cable drove almost all of the inquiries and complaints to the company.

This tells me that offering cable is likely costing a small company a lot more than it realizes. By the time you factor in the true amount of customer service time and the truck rolls associated with the cable product, it’s very likely that for small companies cable is completely under water.

The cable companies still have one major advantage that gives them a lot of flexibility. In the majority of the markets in the US the cable companies have no real competition with their data products and they have captured the lion’s share of the market. The latest statistics I’ve seen show that less than 10% of the homes in the country have access to fiber, and a lot of that is Verizon FiOS which is no longer expanding. In most markets the cable companies are still competing against DSL – a battle they have largely won.

For a while the telcos were rapidly expanding broadband products based upon paired-copper DSL, like AT&T U-verse, and were capturing a lot of data customers. But a lot of homes are starting to find that a data pipe that delivers around 40 Mbps, and which must be shared between the video and data products, is not fast enough for them. This might be the primary reason that AT&T bought DirecTV: to take pressure off its huge embedded base of U-verse customers by moving cable TV back to the satellites.

There is a lot of press about the growth in fiber-to-the-home. CenturyLink says they will pass 700,000 homes with fiber by the end of the year. AT&T is announcing new markets almost weekly for their new fiber product. And Google is steadily but slowly building fiber to new cities. But even if all of this fiber activity raises the national fiber passings to 20% of homes the cable companies will still be in the driver’s seat in most markets.

The larger cable companies are being proactive in order to preserve their large market broadband penetration rates. They have almost all announced that they are embracing DOCSIS 3.1 and will be significantly increasing data speeds in markets ahead of any fiber builds. Until now fiber roll-outs have had great success when entering markets where they are selling gigabit fiber against a 15 – 30 Mbps cable product. But fiber’s success is not going to be so automatic if cable companies can counter gigabit fiber with a lower-priced 250 Mbps or faster data product.

To come back around to my original point, it’s clear that data is becoming everything for cable companies. Analysts have been wondering for a few years how the large cable packages might eventually unravel. There has been a lot of speculation that cord-cutters and OTT programming will chip away at the business. But the death of the traditional cable packages might instead come when the cable companies all stop caring about cable TV. At that point they will have regained the balance of power against the programmers.

A Forever Fight Against Municipal Competition?

The appeal of the FCC’s attempt to overturn state laws that preclude municipalities from building broadband networks is working its way through the courts. Both Tennessee and North Carolina have sued the FCC to stop it from overturning their existing telecom laws.

It’s hard to say which way the courts will rule on the issue. The states are painting this as a states’ rights issue. In a recent filing in the case, Tennessee said that states have an “inviolable right to self-governance . . . Far from being a simple matter of preemption, as the FCC claims, this intervention between the State and its subordinate entities is a manifest infringement on State sovereignty.”

Meanwhile, the FCC is following one of the basic responsibilities that it was tasked with by Congress. Section 706 of the Telecommunications Act of 1996 directed the FCC to take actions to remove barriers to broadband investment. I remember when the Act came out that there was a lot of discussion of how this would allow municipalities everywhere into the broadband business. Even then there were numerous barriers to municipalities becoming telephone companies and my peers and I at the time read this language to mean that the FCC would do precisely what they were told to do – which was to remove barriers. It certainly took the FCC a long time to tackle the issue.

Perhaps in the long run it doesn’t really matter what the courts say. In the two cases being appealed, the FCC ruled against specific laws in North Carolina and Tennessee that prohibited certain actions by municipalities in those states. Even if Chattanooga and Wilson, NC win those cases, the victory would apply only to those specific laws in those states.

There would be nothing stopping the legislators in those same states from passing legislation that puts different blocks on municipal competition. Over the years states have used this tactic in trying to overturn federal abortion laws, and more recently in fights against gay marriage. Note that I am not equating the fight for municipal broadband to those hot-button topics, but rather pointing out that the same legislative tactics are available to states that don’t like the FCC ruling. They are free to try to pass different laws to chip away at the FCC until they find something that sticks.

In my mind the FCC ruling might well provide some relief for both Chattanooga and Wilson, but one has to ask if it is going to provide much help to other cities. The cost of fighting these laws has to be steep for those two cities, and one would think that there are not a lot of other cities ready to fight this hard to overturn a broadband prohibition.

I might be wrong about this and there might be dozens of cities lining up awaiting the court decisions in these cases. But realistically, the cost of the expensive court fights needed to challenge existing telecom laws is in itself a big barrier to entry for cities and most of them are probably not willing to tackle the issue.

What is most interesting about this whole fight is that there are not a huge number of cities wanting to become ISPs. I’ve seen dozens of RFPs this year from cities wanting fiber and the majority of those RFPs are seeking a commercial provider to bring broadband to the cities. For the most part cities only end up getting into the broadband business when they don’t see any alternative.

It ought to be clear to all legislators by now that just about every city that doesn’t have a fiber network wants one. Cities without broadband can see themselves slipping against cities who have been lucky enough to get it. Affordable broadband brings a lot of things to cities such as jobs, small business growth, the ability of citizens to telecommute, increased property values, etc.

But the telecom lobby is one of the more powerful lobbies in the country. The large telcos and cable companies contribute to politicians the whole way down to the local government level, and that has paid off for them in many ways. In a lot of states the legislation that is blocking municipal competition was written by the large ISPs like AT&T. And I suspect the large ISPs are willing to keep writing more legislation if that will keep away competition.

The Ongoing Fight Against Network Neutrality

There was such a big ruckus over the net neutrality battle that it’s easy to think the fight against it is over and that net neutrality is now the law of the land. But I see news on a regular basis indicating that the fight is not over.

First, the lawsuit filed by USTelecom is still being fought in court. A few months ago there were comments filed in that case by USTelecom, AT&T, CenturyLink, the National Cable and Telecommunications Association (NCTA), the Wireless Association (CTIA), the Wireless Internet Service Providers Association (WISPA), and the American Cable Association (ACA), all of which argued that the FCC had exceeded its authority when it adopted the net neutrality rules. AT&T subsequently dropped its challenge as part of the agreement to buy DirecTV.

This is quite a diverse group and they don’t all share identical concerns about the net neutrality rules, but together these trade groups represent both the smallest and the largest telcos, cable companies, cellular providers, and ISPs in the country, all of whom would like to see net neutrality overturned.

And there is no certainty that the court will uphold the net neutrality order. I’ve read legal opinions on both sides of the issue that make a pretty good case for why the FCC ought to be upheld or overturned. Most of these arguments revolve around whether the FCC had the authority to act as it did – with obviously very different opinions on the issue.

And then there are the politicians. The politicians have gotten somewhat quiet on the issue since it has been in the courts. But overturning net neutrality is still part of the Republican Party platform, and one would expect this issue to come up every time Congress looks at funding the FCC. The House Energy and Commerce Committee will be “taking a closer look at how the Commission’s net neutrality rules impact our fragile economy, as well as what can be done to foster continued deployment of broadband networks,” in the words of its chairman, Rep. Greg Walden (R-Ore.). And very recently, Jeb Bush came out against net neutrality in one of his stump speeches.

I can understand why carriers would be against some aspects of net neutrality, since it puts limitations on the things they can do to make money. But I’ve never understood why any politician would take a strong stance against it. When this is explained to people in the simplest terms – that net neutrality basically says your ISP can’t make deals that would impede your ability to use the Internet freely – most people think it’s a good idea. I certainly understand that politicians are often beholden to the large corporations that fund them. But one would think that on a topic this popular with the general public, politicians would find backdoor ways to fight something like net neutrality rather than being staunchly and publicly against it.

I’m even a little surprised that the industry is still fighting this battle as hard as they are. The one thing Wall Street hates is uncertainty, and if net neutrality is overturned we would return to a period of major regulatory uncertainty. Wall Street seems to have favored the industry since net neutrality was passed. For example, until the recent market correction the large cable companies had seen a major surge in stock prices, due at least in part to the fact that net neutrality has brought regulatory stability to the market.

There were dire predictions before net neutrality was passed that it would kill capital investments in the industry. And yet we see companies like CenturyLink pouring billions into expanding their fiber networks. And AT&T didn’t massively cut back on capital spending as they had threatened during the net neutrality debate.

As someone who makes part of my living helping companies keep up with regulations, it seems to me that net neutrality hasn’t drastically changed the way companies do business so far. I find it interesting that the WISPA group is so against net neutrality, because I see its member companies expanding like crazy in rural areas and I can’t imagine that any of them have seen drastic changes due to the new rules.

US Telcos Indifferent to G.Fast

G.Fast is a new technology that can deliver a large swath of bandwidth over copper wires for a short distance. The technology uses the very high frequencies that can travel over copper, in much the same way that DSL uses lower frequencies.

The International Telecommunication Union (ITU) just approved the final standard for the technology, G.9701, “Fast Access to Subscriber Terminals.” Several vendors, including Alcatel-Lucent and Huawei, have been producing and testing units in various field trials.

British Telecom has done a number of these tests. The largest such test was started in August for 2,000 customers in Huntingdon, Cambridgeshire. During the trial they are offering customers speeds of 330 Mbps, but they expect at the end of the trial to be able to raise this to about 500 Mbps.

The technology involves building fiber along streets and then using the existing copper drops to bring the bandwidth into the home. This is the most affordable kind of fiber construction because a telco can overlash fiber onto its existing copper wires on the poles. That means very little make-ready work, no permits needed, and no impediments to quick construction. This kind of fiber construction can literally be done at half of the cost faced by other fiber overbuilders.

British Telecom has done a number of trials across the country, and Alcatel-Lucent has also done trials with Telekom Austria. But for the most part American telcos have shown no interest in the technology. The only real US trial that I’ve read about is one with CenturyLink in Las Vegas.

And I frankly don’t understand the reluctance. G.Fast is a halfway solution on the way to a full fiber deployment. As cable companies and overbuilders like Google are stepping up deployment of gigabit speeds, either through fiber or through fast cable modems using DOCSIS 3.1, the telcos have been announcing fiber builds to remain competitive. AT&T has announced gigabit fiber builds in more than twenty markets. CenturyLink says it will be passing 700,000 homes with fiber in 2016.

So why wouldn’t an American telco seriously consider G.Fast? With real-world capabilities up to 500 Mbps, it gives them a product that can compete well with other fast technologies. By overlashing fiber to deploy G.Fast, a telco tackles one of the major costs of building an FTTP network by getting fiber deep into the network. And with G.Fast a telco can avoid the fiber drops and drop electronics, which for them are the most expensive parts of an FTTP network.
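
A back-of-the-envelope sketch of that cost argument follows. Every dollar figure in it is an assumption purely for illustration (real construction and electronics costs vary widely by market); the point it makes is simply that overlashed street fiber plus reused copper drops avoids the two biggest FTTP cost items named above.

```python
# Back-of-the-envelope comparison of full FTTP versus overlash-fiber + G.Fast.
# Every dollar figure below is an assumption for illustration only; actual
# construction and electronics costs vary widely by market.

def cost_per_home(street_fiber, drop, electronics):
    """Total assumed cost per home served for one architecture."""
    return street_fiber + drop + electronics

fttp  = cost_per_home(street_fiber=1200, drop=700, electronics=600)   # new fiber plus fiber drop + home electronics
gfast = cost_per_home(street_fiber=600,  drop=0,   electronics=300)   # overlashed fiber, reuse the copper drop

print(f"Assumed FTTP cost per home:   ${fttp}")
print(f"Assumed G.Fast cost per home: ${gfast}")
print(f"G.Fast as a share of FTTP:    {gfast / fttp:.0%}")   # ~36% under these assumptions
```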

I could envision somebody like CenturyLink building fiber to the more lucrative parts of town while deploying G.Fast in older copper neighborhoods. This would give them far greater fast-broadband coverage, making it easier and more cost-effective to market their broadband.

But it seems like most of the US telcos just want out of the copper business. And so, rather than take this as an opportunity to milk another decade out of their copper networks before finally building fiber, they seem prepared to cede even more broadband customers to the cable companies. That has me scratching my head. The cable companies have clearly accepted that their entire future is as ISPs and that data is the only real product that will matter in the future. It just seems that the large telcos have not quite yet come to this same conclusion.