Consolidation of Telecom Vendors

It looks like we might be entering a new round of consolidation among telecom vendors. Within the last year the following consolidations have been announced:

  • Cisco is paying $1.9 billion for BroadSoft, a market leader in cloud services and software for applications like call centers.
  • ADTRAN purchased CommScope’s EPON fiber equipment business; the gear is also DOCSIS-compliant so it can work with cable networks.
  • Broadcom is paying $5.9 billion to buy Brocade Communications, a market leader in data storage networking as well as a range of other telecom equipment.
  • Arris is buying Ruckus Wireless, which is being spun off as part of the Brocade acquisition. Arris’s goal is to be the provider of wireless equipment for the large cable TV companies.

While none of these acquisitions will have any immediate impact on small ISPs, analysts are predicting that a lot more consolidation is coming in the telecom vendor space. Most of my clients were affected to some degree by the last wave of vendor consolidation back around 2000, and that wave touched a lot of ISPs.

There are a number of reasons why the industry might be ripe for a round of mergers and acquisitions:

  • One important technology trend is the move by a lot of the largest ISPs, cable companies and wireless carriers to software defined networking (SDN). This means putting the brains of the technology into centralized data centers, which allows cheaper and simpler electronics at the edge. The advantages of SDN are huge for these big companies. For example, a wireless company could update the software in thousands of cell sites simultaneously instead of having to make upgrades at each site. But SDN also means those companies will be buying less costly and less complicated gear.
  • The biggest buyers of electronics are starting to make their own gear. For example, the operators of large data centers like Facebook are working together under the Open Compute Project to create cheap routers and switches for their data centers, which is tanking Cisco’s switch business. In another example, Comcast has designed its own set-top box.
  • The big telcos have made it clear that they are going to be backing out of the copper business. In doing so they are going to drastically cut back on the purchase of gear used in the last mile network. This hurts the vendors that supply much of the electronics for the smaller telcos and ISPs.
  • I think that over the next few decades we will see an overall shift toward more customers being served by cable TV and wireless networks. Spending on electronics in those markets will benefit few small ISPs.
  • There are not a lot of vendors left in the industry today, so every merger means a little less competition. Just consider FTTH equipment: fifteen years ago there were more than a dozen vendors working in this space, but over time that number has been cut in half.

There are a number of reasons why these trends could foretell future trouble for smaller ISPs, possibly within the next decade:

  • Smaller ISPs have always relied on bigger telcos to pave the way in developing new technology and electronics. But if the trend is towards SDN and towards large vendors designing their own gear, then this will no longer be the case. Consider FTTP technology. If companies like Verizon and AT&T shift towards software defined networking and electronics developed through collaboration, there will be less development done with non-SDN technology. One might hope that the smaller companies could ride the coattails of the big telcos in an SDN environment – but as each large telco develops its own proprietary software to control SDN networks, that is not likely to be practical.
  • Small ISPs also rely on the larger companies buying enough volume of electronics to hold down prices. But as the big companies buy fewer of the standard electronics the rest of us use, you can expect either big price increases or, worse yet, no vendors willing to serve the smaller carrier market. It’s not hard to envision smaller ISPs reduced to competing in the grey market for used and reconditioned gear – something some of my clients operating ten-year-old FTTP networks already do.

I don’t want to sound like the voice of gloom, and I expect that somebody will step into the voids created by these trends. But that is liable to mean smaller ISPs will end up relying on foreign vendors that do not come with the same kinds of prices, reliability or service the industry is used to today.

Technology and Telecom Jobs

In case you haven’t noticed, the big companies in the industry are cutting a lot of jobs – maybe the biggest job cuts ever in the industry. These cuts are due to a variety of reasons, but technology change is a big contributor.

There have been a number of announced staff cuts by the big telecom vendors. Cisco recently announced it would cut as many as 5,500 jobs, or about 7% of its global workforce. Cisco’s job cuts are mostly due to the Open Compute Project, where the big data center owners like Facebook, Amazon, Google, Microsoft and others have turned to a model of developing and directly manufacturing their own routers, switches and data center gear. Cloud data services are meanwhile wiping out the need for corporate data centers as companies move most of their computing processes to the much more efficient cloud. Even customers that are still buying Cisco boxes are buying fewer of them, since the newer technology provides a huge increase in capacity over older gear and they need fewer routers and switches.

Ericsson has laid off around 3,000 employees due to falling business. The biggest culprit for them is software defined networking (SDN). Most of the layoffs are related to cell site electronics. The big cellular companies are actively converting their cell sites to centralized control, with the brains in the core, which will let them make one change and have it instantly implemented in tens of thousands of cell sites. Today that process requires upgrading the brains at each cell site and sending a horde of technicians to travel to and update each one.
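
To picture what that centralized control means in practice, here is a minimal sketch in Python (with made-up names – this is not any vendor’s actual system) of a controller pushing one configuration change out to every site it manages, rather than a technician updating each site by hand:

```python
# Illustrative sketch of the SDN idea described above: one change made at a
# central controller gets pushed to every cell site it manages, instead of a
# crew upgrading the "brains" at each site. All names here are hypothetical.

class CentralController:
    def __init__(self, sites):
        self.sites = sites
        # each site starts with whatever simple edge configuration it shipped with
        self.site_config = {site: {"firmware": "1.0"} for site in sites}

    def push_update(self, change):
        """Apply a single configuration change to every managed site."""
        for site in self.sites:
            self.site_config[site].update(change)
            # in a real network this would be a call to the site over a
            # management protocol, not a print statement
            print(f"{site} now running {self.site_config[site]}")

controller = CentralController([f"cell-site-{n}" for n in range(1, 4)])
controller.push_update({"firmware": "2.0", "scheduler": "low-latency"})
```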

Nokia plans to lay off at least 3,000 employees and maybe more. Part of these layoffs is due to the final integration of the Alcatel-Lucent purchase, but they also stem from the technology changes that are affecting every vendor.

Cuts at the operating carriers are likely to be a lot larger. A recent article in the New York Times reported that internal projections at AT&T had the company planning to eliminate as many as 30% of its jobs over the next few years, which would be roughly 80,000 people and the biggest telco layoff ever. The company has never officially confirmed a number, but top AT&T officials have been warning all year that many job functions at the company are going to disappear and that only nimble employees willing to retrain have any hope of keeping a long-term job.

AT&T will be shedding jobs for several reasons. One is the big reduction in technicians needed to upgrade cell sites. But an even bigger reason is the company’s plan to decommission and walk away from huge amounts of its copper network. There is no way to know if the 80,000 number is valid, but even a reduction half that size would be gigantic.

And vendor and carrier cuts are only a small piece of the cuts that are going to be seen across the industry. Consider some of the following trends:

  • Corporate IT staffs are downsizing quickly as computing functions move to the cloud. There are a huge number of technicians with Cisco certifications, for example, who are finding themselves out of work as their companies eliminate in-house data centers.
  • On the flip side of that, huge data centers are being built to take over these same IT functions with only a tiny handful of technicians. I’ve seen reports where cities and counties gave big tax breaks to data centers because they expected them to bring jobs, but instead a lot of huge data centers are operating with fewer than ten employees.
  • In addition to employees, there are fleets of contract technicians who do things like updating cell sites, and those opportunities are going to dry up over the next few years. There will always be opportunities for technicians brave enough to climb cell towers, but that is not a giant source of demand.

It looks like over the next few years there are going to be a whole lot of unemployed technicians. Technology companies have always been cyclical, and it’s never been unusual for engineers and technicians to work for a number of different vendors or carriers during a career. But in the past, when there was a downsizing in one part of the industry, the slack was usually picked up somewhere else. This time we might be looking at a permanent downsizing. Once SDN networks are in place the jobs for those networks are not coming back. Once most IT functions are in the cloud those jobs aren’t coming back. And once the rural copper networks are replaced with 5G cellular those jobs aren’t coming back.

Can Small Networks Keep Up?

I hope this blog doesn’t come across as too negative, but it seems that every few weeks I read something that makes me think, “That is not good for small network owners.” There are a lot of changes going on at the top of our industry that are, at a minimum, worrisome for small network owners.

The biggest users of servers and related hardware have come together to create a consortium called the Open Compute Project that is working to create cheap generic hardware that will replace the expensive servers and switches bought from companies like Cisco and Juniper. This effort was started by Facebook and is likely to completely disrupt that industry. In addition to that effort, a few of the largest companies like Amazon and Google have developed their own proprietary hardware.

Recently a pile of large operators like Verizon, AT&T, Deutsche Telekom, Korea’s SK Telecom, and Equinix have joined the effort. It looks like all of the largest users of this kind of equipment will be buying their hardware through new channels, which is going to devastate the existing vendors.

One might think that this is good news for smaller companies, because in this industry small companies have always ridden the coattails of the large companies. We have all benefited from reasonable prices and a variety of options because those options were created for the big users.

But unfortunately, the Open Compute Project isn’t going to operate that way. Anybody is free to use the open specifications that are being created, but companies are then expected to modify those specs to meet their needs and then find a way to get the gear built. Probably the top 95% of the market will no longer be buying off-the-shelf servers, which is not good for smaller users. Small companies, meaning anybody smaller than perhaps CenturyLink, will not have the resources to wade through the open source process to make their own hardware.

One might hope that there would still be somebody left to supply all of the smaller users of this equipment, but that flies in the face of the industry’s past experience. Without big buyers of equipment, there is unlikely to be much R&D or new product development aimed at a much smaller potential market. There are analysts who believe that companies like Cisco and Juniper will eventually flee the server market.

One has to worry about the general availability of telecom electronics of any kind in the future. The open source movement is not going to stop with servers; over time it will tackle fiber electronics, cable headends, set-top boxes, you name it. As the big companies stop buying from vendors we are likely to see a lot of failures among the already reduced field of telecom vendors.

Along with the move to open source hardware is a similar move towards open source software to control that hardware. Again, one might think that small companies could just use the open source software, but that also doesn’t work the way you might hope. Open source software provides a sprawling mass of options, and companies that use it for something like operating servers have to select what they want out of it and develop their own package. That is way past the abilities or budgets of smaller companies. This is another area where we benefit today from the work done for the larger carriers.

Small companies probably feel safe because there are a few vendors around today that specialize in serving small carriers. But many of us in the industry know that telecom vendors come and go. Any vendor that gets most of its revenue from the big ISPs is going to be in trouble. And when I look at the vendors used by small companies today, I see almost none that were here twenty years ago. The periodic downturns in the industry have always been hard for vendors to weather, and there might not be enough volume from small telecom carriers to support healthy vendors for the long haul.

I hope I am wrong about all of this. Each one of these factors alone would be a cause for some concern. But taken together, these trends point to a future five to ten years from now where there will be fewer vendors, where it’s going to be harder and more expensive for smaller carriers to buy gear, and where there might not be much dollar incentive for anybody to do for small carriers the same R&D that the big carriers will be doing on their own.

The Open Compute Project

I wrote recently about how a lot of hardware is now proprietary and how the largest buyers of network gear are designing and building their own equipment, bypassing the normal supply chains. My worry about this trend is that all of the small buyers of such equipment are getting left behind, and it’s not hard to foresee a day when small carriers won’t be able to find affordable network routers and other similar equipment.

Today I want to look one layer deeper into that premise and look at the Open Compute Project. This was started just four years ago by Facebook and is creating the hardware equivalent of open source software like Linux.

Facebook found themselves wanting to do things in their data centers that could not be done with gear from Cisco, Dell, HP or the other traditional vendors of switches and routers. They were undergoing tremendous growth and their traffic was increasing faster than their networks could accommodate.

So Facebook followed the trend set by other large companies like Google, Amazon, Apple, and Microsoft, and set off to design their own data centers and data center equipment. Facebook had several goals. They wanted to make their equipment far more energy efficient, because data centers are huge generators of heat, and Facebook was using a lot of energy to keep servers cool and was looking for a greener solution. They also wanted to create routers and switches that were fast, yet simple and basic, and they wanted to control them with centralized software – which differed from the rest of the market, where the brains were built into each network router. This made Facebook one of the pioneers in software defined networking (SDN).

And they succeeded; they developed new hardware and software that allowed them to handle far more data than they could have with what was on the market at the time. But then Facebook took an extraordinary step and decided to make what they had created available to everybody else. Jonathan Heiliger at Facebook came up with the idea of making their hardware open source. Designing better data centers was not a core competency for Facebook, and he figured that the company would benefit in the future if other outside companies joined them in searching for better data center solutions.

This was a huge contrast to what Google was doing. Google believes that hardware and software are key differentiators in the market, and so it has kept everything it has developed proprietary. But Facebook had already been using open source software and saw the benefits of collaboration. They saw that when numerous programmers worked together the result was software that worked better, had fewer bugs, and could be modified quickly, as needed, by bringing together a big pool of programming resources. They thought the same thing could happen with data center equipment.

And they were right. Their Open Compute Project has been very successful and has drawn in other large partners. Companies like Apple, HP, and Microsoft now participate in the effort. It has also drawn in large industry users like Wall Street firms who are some of the largest users of data center resources. Facebook says that they have saved over $2 billion in data center costs due to the effort and their data centers are using significantly less electricity per computation than before.

And a new supply chain has grown up around the concept. Any company can get access to the specifications and design its own version of the equipment. There are manufacturers ready to build anything that comes out of the process, meaning that the companies in this collaborative effort have bypassed the traditional telecom vendors and work directly with a factory to produce their gear.

This effort has been very good for these large companies, and good for the nation as a whole, because through collaboration these companies have pushed the limits of data center systems to make them less expensive and more efficient. They claim that, for now, they have leapt ahead of Moore’s law.

But as I wrote earlier, this leaves out the rest of the world. Smaller carriers cannot take advantage of this process. Small companies don’t have the kind of staff that can work with the design specs, and no factory is going to make a small batch of routers. While the equipment designs and the controlling software are open source, each large member is building different equipment and none of it is available on the open market. And small companies wouldn’t know what to do with the hardware if they got it, because it’s controlled by open source software that doesn’t come with training or manuals.

So smaller carriers are still buying from Cisco and the traditional switch and router makers. The small carriers can still find what they need in the market. But if you look ten years forward this is going to become a problem. Companies like Cisco have always funded their next generation of equipment by working with one or two large customers to develop better solutions. The rest of Cisco’s customers would then get the advantages of this effort as the new technology was rolled out to everybody else. But the largest users of routers and switches are no longer using the traditional manufacturers. That is going to mean less innovation over time in the traditional market. It also means that the normal industry vendors aren’t going to have the huge revenue streams from large customers to make gear affordable for everybody.

The Battle of the Network Switches

Yesterday Facebook announced that it has successfully built an open-source network switch. This is really big news in an industry where Cisco and Juniper together have more or less cornered the switch market. The Facebook switch is named Wedge and is operated by an open-source software platform called FBOSS. Both were created as part of the Open Compute Project (OCP), which was started by Facebook but now involves many other companies. The goal of the project is to radically change the way companies buy hardware and software, and it is starting to achieve that goal.

This announcement is going to shake up the $23 billion Ethernet switch market in the same way that the introduction of the softswitch killed the duopoly on voice switches once held by Nortel and Lucent. I’ve written earlier about how the Ethernet switch industry is moving towards software-defined networking (SDN). The goal of SDN is to take features that have been baked into hardware, such as security and device management, and make those functions software controlled.

Cisco has already introduced its own version of SDN, and it now has software that will control its various devices. But honestly this is only a modest change, because at the end of the day all of Cisco’s hardware and software is still proprietary. We are all familiar with network engineers who need multiple Cisco certifications just to be able to operate the Cisco gear. Cisco’s SDN doesn’t really change that need for network engineers or lower the cost. It just layers new software on top of the old platform.

The industry was ripe for this change because Cisco has grown into the same kind of company that we saw in Lucent and Nortel at their peak. The Cisco pricing model now includes a permanent 15% annual fee on top of any hardware you buy from them. This fee is ostensibly for upgrades and maintenance, but the people who write the checks don’t feel like they are getting much value for the money. This sounds exactly like the kind of pricing practice we saw in the voice industry when it was a duopoly of Nortel and Lucent.
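
To see what a recurring 15% fee does to the cost of ownership, here is a quick back-of-the-envelope calculation; the purchase price and the five-year life are made-up numbers, purely for illustration:

```python
# Illustrative only: what a recurring 15% annual maintenance fee adds to a
# hardware purchase. The $100,000 price and five-year life are assumptions.
purchase_price = 100_000      # one-time hardware cost
annual_fee_rate = 0.15        # 15% of the purchase price, every year
years_in_service = 5

maintenance_total = purchase_price * annual_fee_rate * years_in_service
print(maintenance_total)                    # 75000.0
print(maintenance_total / purchase_price)   # 0.75 -> fees add 75% on top of the hardware cost
```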

Cisco has been reported to have a 60% profit margin, so it is ripe for a challenge. Cisco is not going to go away easily, and it has been very clever in the way it has shaped the network switch market. That market is run by network engineers who make the purchasing decisions, and Cisco has made certain that they all carry a long list of Cisco certifications. And frankly, the OCP initiative is aimed directly at getting rid of those network engineers, in the same way that cloud computing is doing away with server engineers.

Certainly Cisco has already lost the largest customers in the market. Facebook will be going with its own new technology. It’s been reported that Amazon, Microsoft and Google are all working on their own versions of SDN switching as well, although none of them are reported to be headed towards open-sourcing like the OCP initiative. One would think that this is going to put a massive amount of price pressure on Cisco in a few years, as ought to happen with any company that has gigantic profit margins. There are still going to be a number of network operators who will go with traditional Cisco for a while simply because it works and is comfortable for them. But as the OCP hardware becomes readily available and proves it can work in the market, it’s going to get harder and harder to justify buying expensive, proprietary gear.

It took a full decade for the traditional voice switch manufacturers to fail after the introduction of the softswitch. And Cisco is probably better equipped to fight back against this change than were Nortel and Lucent. But in the early days of the softswitch I saw some of my clients cut their hardware and maintenance costs in half by going with a softswitch and it was obvious then that the newer technology would eventually win. This Facebook announcement is the first day of the decade that is going to transform the way we buy and use network switches.