Consolidation of Telecom Vendors

It looks like we might be entering a new round of consolidation among telecom vendors. Within the last year the following acquisitions have been announced:

  • Cisco is paying $1.9 billion for BroadSoft, a market leader in cloud voice services and software for applications like call centers.
  • ADTRAN purchased CommScope’s EPON business, which makes EPON fiber equipment that is also DOCSIS compliant to work with cable networks.
  • Broadcom is paying $5.9 billion to buy Brocade Communications, a market leader in data storage networking as well as a range of telecom equipment.
  • Arris is buying Ruckus Wireless in a spinoff from the Brocade acquisition. Arris’s goal is to be the provider of wireless equipment for the large cable TV companies.

While none of these acquisitions will have any immediate impact on small ISPs, I’ve been seeing analysts predict that there is a lot more consolidation coming in the telecom vendor space. Most of my clients were impacted to some degree by the last wave of vendor consolidation back around 2000, and that wave rippled through a lot of ISPs.

There are a number of reasons why the industry might be ripe for a round of mergers and acquisitions:

  • One important technology trend is the move by a lot of the largest ISPs, cable companies and wireless carriers to software defined networking (SDN). This means putting the brains of the network into centralized data centers, which allows cheaper and simpler electronics at the edge. The advantages of SDN are huge for these big companies. For example, a wireless company could update the software in thousands of cell sites simultaneously instead of having to make upgrades at each site (there is a small sketch of this model after the list). But SDN also means those companies will be buying less costly, less complicated gear, which shrinks vendor revenues.
  • The biggest buyers of electronics are starting to make their own gear. For example, the operators of large data centers like Facebook are working together under the Open Compute Project to create cheap routers and switches for their data centers, which is eating into Cisco’s switch business. In another example, Comcast has designed its own set-top box.
  • The big telcos have made it clear that they are going to be backing out of the copper business. In doing so they are going to drastically cut back on the purchase of gear used in the last mile network. This hurts the vendors that supply much of the electronics for the smaller telcos and ISPs.
  • I think we will see an overall shift over the next few decades towards more customers being served by cable TV and wireless networks, and spending on electronics in those markets benefits very few small ISPs.
  • There are not a lot of vendors left in the industry today, and so every merger means a little less competition. Just consider FTTH equipment. Fifteen years ago there were more than a dozen vendors working in this space, but over time that number has been cut in half.
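
To make the SDN point concrete, here is a minimal sketch of the operational model it describes – one central controller pushing the same software update to thousands of cell sites in one pass. The Site record and push_update call are invented for illustration and don’t correspond to any real vendor API.

```python
# Sketch of the SDN operational model: one controller, thousands of
# simple edge sites. Site and push_update are hypothetical stand-ins,
# not any vendor's real management API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    mgmt_addr: str  # management address of the cheap, simple edge device

def push_update(site: Site, image: bytes) -> str:
    # In a real network this would be an RPC to the edge device; here we
    # just pretend the push succeeded.
    return f"{site.name}: updated ({len(image)} bytes)"

def update_fleet(sites: list[Site], image: bytes) -> list[str]:
    # The whole fleet is upgraded from one central place -- no truck
    # rolls, no per-site visits. The "brains" stay in the data center.
    with ThreadPoolExecutor(max_workers=64) as pool:
        return list(pool.map(lambda s: push_update(s, image), sites))

if __name__ == "__main__":
    fleet = [Site(f"cell-{i:04d}", f"10.0.{i // 256}.{i % 256}")
             for i in range(2000)]
    results = update_fleet(fleet, image=b"\x00" * 4096)
    print(results[0], "...", results[-1], f"({len(results)} sites)")
```

The point of the sketch is the shape of the model, not the mechanics: all of the intelligence and all of the operational work sit in one place, which is exactly why the gear at the edge can be cheaper and simpler.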

There are a number of reasons why these trends could foretell future trouble for smaller ISPs, possibly within the next decade:

  • Smaller ISPs have always relied on bigger telcos to pave the way in developing new technology and electronics. But if the trend is towards SDN and towards the largest buyers designing their own gear, then this will no longer be the case. Consider FTTP technology. If companies like Verizon and AT&T shift towards software defined networking and electronics developed through collaboration, there will be little development done with non-SDN technology. One might hope that the smaller companies could ride the coattails of the big telcos in an SDN environment – but as each large telco develops its own proprietary software to control its SDN network, that is not likely to be practical.
  • Smaller ISPs also rely on larger vendors buying enough volume of electronics to hold down prices. But as the big companies buy less of the standard electronics the rest of us use, you can expect either big price increases or, worse yet, no vendors willing to serve the smaller carrier market. It’s not hard to envision smaller ISPs reduced to competing in the grey market for used and reconditioned gear – something some of my clients operating ten-year-old FTTP networks already do.

I don’t want to sound like the voice of gloom, and I expect that somebody will step into the voids created by these trends. But that’s liable to mean smaller ISPs will end up relying on foreign vendors that won’t come with the same prices, reliability or service the industry is used to today.

The Open Compute Project

I wrote recently about how a lot of network hardware is becoming proprietary and how the largest buyers of network gear are designing and building their own equipment, bypassing the normal supply chains. My worry about this trend is that all of the small buyers of such equipment are getting left behind, and it’s not hard to foresee a day when small carriers won’t be able to find affordable network routers and similar equipment.

Today I want to dig one layer deeper into that premise and look at the Open Compute Project. It was started just four years ago by Facebook and is creating the hardware equivalent of open source software like Linux.

Facebook found themselves wanting to do things in their data centers that Cisco, Dell, HP and the other traditional vendors of switches and routers couldn’t satisfy. They were undergoing tremendous growth, and their traffic was increasing faster than their networks could accommodate.

So Facebook followed the trend set by other large companies like Google, Amazon, Apple, and Microsoft, and set off to design their own data centers and data equipment. Facebook had several goals. They wanted to make their equipment far more energy efficient: data centers are huge generators of heat, and Facebook was using a lot of energy to keep servers cool and wanted a greener solution. They also wanted to create routers and switches that were fast yet simple and basic, controlled by centralized software – a departure from the rest of the market, which built the brains into each network router. This made Facebook one of the pioneers in software defined networking (SDN).
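
A rough way to picture the split Facebook was after is sketched below. This is illustration only – the rule format and class names are invented for the sketch, not OpenFlow or any real protocol. The switch is deliberately dumb: it just applies whatever forwarding table the central controller hands it.

```python
# Toy illustration of the control-plane/data-plane split behind SDN.
# The switch stores and applies rules; every decision is made centrally.
# The rule format and classes here are invented for this sketch.

class Switch:
    def __init__(self, name: str):
        self.name = name
        self.rules: dict[str, str] = {}  # destination prefix -> output port

    def install_rules(self, rules: dict[str, str]) -> None:
        self.rules = dict(rules)  # no local logic; just accept the table

    def forward(self, dst: str) -> str:
        # Longest-prefix match is the only "smarts" left in the device.
        best = max((p for p in self.rules if dst.startswith(p)),
                   key=len, default=None)
        return self.rules[best] if best else "drop"

class Controller:
    """The central brains: computes every switch's table from a global view."""
    def __init__(self, tables: dict[str, dict[str, str]]):
        self.tables = tables  # switch name -> (prefix -> port)

    def program_network(self, switches: list[Switch]) -> None:
        for sw in switches:
            sw.install_rules(self.tables[sw.name])

tables = {"tor-1": {"10.1.": "uplink-a", "10.2.": "uplink-b"}}
sw = Switch("tor-1")
Controller(tables).program_network([sw])
print(sw.forward("10.2.7.9"))     # -> uplink-b
print(sw.forward("192.168.1.1"))  # -> drop
```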

And they succeeded; they developed new hardware and software that allowed them to handle far more data than they could have with what was on the market at the time. But then Facebook took an extraordinary step and decided to make what they had created available to everybody else. Jonathan Heiliger at Facebook came up with the idea of making their hardware open source. Designing better data centers was not a core competency for Facebook, and he figured that the company would benefit in the future if outside companies joined them in searching for better data center solutions.

This was a huge contrast to what Google was doing. Google believes that hardware and software are their key differentiators in the market, and so they have kept everything they have developed proprietary. But Facebook had already been using open source software and had seen the benefits of collaboration: when numerous programmers work together, the result is software with fewer bugs that works better and can be modified quickly, as needed, by drawing on a big pool of programming resources. And they thought the same thing could happen with data center equipment.

And they were right. The Open Compute Project has been very successful and has drawn in other large partners – companies like Apple, HP, and Microsoft now participate in the effort. It has also drawn in large industry users like Wall Street firms, which are among the biggest consumers of data center resources. Facebook says it has saved over $2 billion in data center costs due to the effort, and its data centers are using significantly less electricity per computation than before.

And a new supply chain has grown up around the concept. Any company can get access to the specifications and design its own version of the equipment, and there are manufacturers ready to build anything that comes out of the process. This means the companies in this collaborative effort have bypassed the traditional telecom vendors entirely and work directly with factories to produce their gear.

This effort has been very good for these large companies, and good for the nation as a whole, because through collaboration these companies have pushed the limits of data center systems to make them less expensive and more efficient. They claim that, for now, they have leapt past Moore’s law and are ahead of the curve.

But as I wrote earlier, this leaves out the rest of the world. Smaller carriers cannot take advantage of this process. Small companies don’t have the kind of staff that can work with the design specs, and no factory is going to make a small batch of routers. While the equipment designs and the controlling software are open source, each large member is building different equipment and none of it is available on the open market. And small companies wouldn’t know what to do with the hardware if they got it, because it’s controlled by open source software that doesn’t come with training or manuals.

So smaller carriers are still buying from Cisco and the traditional switch and router makers, and for now they can still find what they need in the market. But look ten years forward and this is going to become a problem. Companies like Cisco have always funded their next generation of equipment by working with one or two large customers to develop better solutions; the rest of Cisco’s customers would then get the advantages of that effort as the new technology was rolled out to everybody else. But the largest users of routers and switches are no longer buying from the traditional manufacturers. That is going to mean less innovation over time in the traditional market, and it means the usual industry vendors won’t have the huge revenue streams from large customers that make gear affordable for everybody.