Telecom Containers

There is new lingo being used by the large telecom companies that will be foreign to the rest of the industry – containers. In the simplest definition, a container is a relatively small set of software that performs one function. The big carriers are migrating to software systems that use containers for several reasons, the primary one being the migration to software defined networks.

A good example of a container is a software application for a cellular company that can communicate with the sensors used in crop farming. The cellular carrier would install this particular container at cell sites where there is a need to communicate with field sensors but would not install it at the many cell sites where such communication isn't needed.

The advantage to the cellular carrier is that this simplifies software deployment. A rural cell site will have a different set of containers than a small cell site deployed near a tourist destination or a cell site in a busy urban business district.
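
To make the idea concrete, here is a minimal sketch of that selection logic. The container names and site profiles below are invented for illustration only – this is not any carrier's actual orchestration system:

    # Hypothetical sketch: which containers does each type of cell site get?
    # All container and profile names are invented for illustration.

    SITE_PROFILES = {
        "rural-farm":     ["core-lte", "farm-sensor-gateway"],
        "tourist-area":   ["core-lte", "seasonal-traffic-shaper"],
        "urban-business": ["core-lte", "enterprise-voip", "high-density-scheduler"],
    }

    BASE_CONTAINERS = ["monitoring", "remote-config"]   # every site runs these

    def containers_for_site(profile):
        """Return the list of containers a site with this profile should run."""
        return BASE_CONTAINERS + SITE_PROFILES.get(profile, [])

    if __name__ == "__main__":
        for profile in SITE_PROFILES:
            print(profile, "->", containers_for_site(profile))

The point is simply that the farm-sensor container only shows up at the sites that need it; every other site never loads it.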

The benefits of this are easy to understand. Consider the software that operates our PCs. The PC manufacturers fill the machine up with every application a user might ever want. However, most of us use perhaps 10% of the applications that are pre-installed on our computer. The downside to having so many software components is that it takes a long time to upgrade the software on a PC – my Mac laptop has at times taken an hour to install a new operating system update.

In a software defined network, the ideal configuration is to move as much of the software as possible to the edge devices – in this particular example, to the cell site. Today every cell site must hold and process all of the software needed by any cell site anywhere. That's both costly, in terms of the computing power needed at the cell site, and inefficient, in that cell sites are running applications that will never be used. In a containerized network each cell site will run only the modules needed locally.

The cellular carrier can make an update to the farm sensor container without interfering with the other software at a cell site. That adds safety – if something goes wrong with that update, only the farm sensor network will experience a problem instead of possibly pulling down the whole network of cell sites. One of the biggest fears of operating a software defined network is that an upgrade that goes wrong could pull down the entire network. Upgrades made to specific containers are much safer, from a network engineering perspective, and if something goes wrong in an upgrade the cellular carrier can quickly revert to the back-up for the specific container to reestablish service.
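
A minimal sketch of that revert-on-failure idea might look like the following. The container name, version numbers and health check are placeholders, not any carrier's real upgrade process:

    # Hypothetical sketch of upgrading one container and rolling back if it misbehaves.
    # The health check is a stub; a real one would poll the service the container provides.

    def health_check(container):
        return True   # placeholder: assume the upgraded container reports healthy

    def upgrade_container(site, container, new_version):
        previous = site["containers"][container]      # remember the known-good version
        site["containers"][container] = new_version   # roll out the update
        if not health_check(container):
            site["containers"][container] = previous  # revert only this one container
            return f"{container} upgrade failed at {site['name']}; reverted to {previous}"
        return f"{container} upgraded to {new_version} at {site['name']}"

    site = {"name": "rural-site-17", "containers": {"farm-sensor-gateway": "1.4"}}
    print(upgrade_container(site, "farm-sensor-gateway", "1.5"))

Whatever the actual mechanics, the isolation is the point: the blast radius of a bad update is one container, not the whole network.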

The migration to containers makes sense for a big telecom carrier. Each carrier can develop unique containers that define their specific product set. In the past most carriers bought off-the-shelf applications like voice mail – but with containers they can more easily customize products to operate as they wish.

Like most things that are good for the big carriers, there is a long-term danger from containers for the rest of us. Over time the big carriers will develop their own containers and processes that are unique to them. They’ll create much of this software in-house and the container software won’t be made available to others. This means that the big companies can offer products and features that won’t be readily available to smaller carriers.

In the past, the products and features available to smaller ISPs came from product research done by telecom vendors for the big ISPs. Vendors developed software for cellular switches, voice switches, routers, set-top boxes, ONTs and all of the other hardware used in the industry. Vendors could justify spending money on software development due to expected sales to the large ISPs. However, as the big ISPs migrate to a world where they buy empty boxes and develop their own container software, there won't be a financial incentive for the hardware vendors to put effort into software applications. Companies like Cisco are already adapting to this change, and it's going to trickle through the whole industry over the next few years.

This is just one more thing that will make it a little harder in future years to compete with the big ISPs. Perhaps smaller ISPs can band together somehow and develop their own product software, but it’s another industry trend that will give the big ISPs an advantage over the rest of us.

Consolidation of Telecom Vendors

It looks like we might be entering a new round of consolidation of telecom vendors. Within the last year the following consolidations among vendors have been announced:

  • Cisco is paying $1.9 billion for BroadSoft, a market leader in cloud services and software for applications like call centers.
  • ADTRAN purchased CommScope's EPON fiber equipment business, which makes gear that is also DOCSIS-compliant so that it can work with cable networks.
  • Broadcom is paying $5.9 billion to buy Brocade Communications, a market leader in data storage devices as well as a range of telecom equipment.
  • Arris is buying Ruckus Wireless as part of a spinoff from the Brocade acquisition. Arris has a goal to be the provider of wireless equipment for the large cable TV companies.

While none of these acquisitions will cause any immediate impact on small ISPs, I've been seeing analysts predict that there is a lot of consolidation coming in the telecom vendor space. I think most of my clients were impacted to some degree by the last wave of vendor consolidation back around 2000, and that wave touched a lot of ISPs.

There are a number of reasons why the industry might be ripe for a round of mergers and acquisitions:

  • One important technology trend is the move by a lot of the largest ISPs, cable companies and wireless carriers to software defined networking. This means putting the brains of the technology into centralized data centers, which allows for cheaper and simpler electronics at the edge. The advantages of SDN are huge for these big companies. For example, a wireless company could update the software in thousands of cell sites simultaneously instead of having to make upgrades at each site (there's a rough sketch of that centralized-update idea after this list). But SDN also means the big companies will be buying less costly and less complicated gear.
  • The biggest buyers of electronics are starting to make their own gear. For example, the operators of large data centers like Facebook are working together under the Open Compute Project to create cheap routers and switches for their data centers, which is tanking Cisco's switch business. In another example, Comcast has designed its own set-top box.
  • The big telcos have made it clear that they are going to be backing out of the copper business. In doing so they are going to drastically cut back on the purchase of gear used in the last mile network. This hurts the vendors that supply much of the electronics for the smaller telcos and ISPs.
  • I think we will see an overall shift over the next few decades toward more customers being served by cable TV and wireless networks. Spending on electronics in those markets will benefit few small ISPs.
  • There are not a lot of vendors left in the industry today, and so every merger means a little less competition. Just consider FTTH equipment. Fifteen years ago there were more than a dozen vendors working in this space, but over time that number has been cut in half.
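
As a toy illustration of the centralized-update point in the first bullet above – the controller class and site names are invented, and a real SDN controller is of course far more involved – a single push from the controller reaches every site it manages:

    # Toy sketch: one update pushed from a central controller to every managed site.
    # In a non-SDN network the same change would mean touching each site individually.

    class Controller:
        def __init__(self, sites):
            self.versions = {site: "1.0" for site in sites}

        def push_update(self, new_version):
            for site in self.versions:          # one operation covers all sites
                self.versions[site] = new_version

    controller = Controller(["cell-site-%d" % n for n in range(1, 5)])
    controller.push_update("2.0")
    print(controller.versions)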

There are a number of reasons why these trends could foretell future trouble for smaller ISPs, possibly within the next decade:

  • Smaller ISPs have always relied on bigger telcos to pave the way in developing new technology and electronics. But if the trend is towards SDN and towards the large carriers designing their own gear, then this will no longer be the case. Consider FTTP technology. If companies like Verizon and AT&T shift towards software defined networking and electronics developed through collaboration, there will be less development done with non-SDN technology. One might hope that the smaller companies could ride the coattails of the big telcos in an SDN environment – but as each large telco develops their own proprietary software to control SDN networks, that is likely not to be practical.
  • Small ISPs also rely on the bigger carriers buying enough volume of electronics to hold down prices. But as the big companies buy less of the standard electronics the rest of us use, you can expect either big price increases or, worse yet, no vendors willing to serve the smaller carrier market. It's not hard to envision smaller ISPs reduced to competing in the grey market for used and reconditioned gear – something some of my clients operating ten-year-old FTTP networks already do.

I don't want to sound like the voice of gloom, and I expect that somebody will step into the voids created by these trends. But that's liable to mean smaller ISPs will end up relying on foreign vendors that won't come with the same kinds of prices, reliability or service the industry is used to today.

The Open Compute Project

I wrote recently about how a lot of hardware is now proprietary and that the largest buyers of network gear are designing and building their own equipment and bypassing the normal supply chains. My worry about this trend is that all of the small buyers of such equipment are getting left behind and it's not hard to foresee a day when small carriers won't be able to find affordable network routers and other similar equipment.

Today I want to dig one layer deeper into that premise and look at the Open Compute Project. This was started just four years ago by Facebook and is creating the hardware equivalent of open source software like Linux.

Facebook found themselves wanting to do things in their data centers that couldn't be satisfied by Cisco, Dell, HP or the other traditional vendors of switches and routers. They were undergoing tremendous growth, and their traffic was increasing faster than their networks could accommodate.

So Facebook followed the trend set by other large companies like Google, Amazon, Apple, and Microsoft, and set out to design their own data centers and data equipment. Facebook had several goals. They wanted to make their equipment far more energy efficient – data centers are huge generators of heat, and Facebook was using a lot of energy to keep servers cool and wanted a greener solution. They also wanted to create routers and switches that were fast, yet simple and basic, and they wanted to control them with centralized software – which differed from the rest of the market, which built the brains into each network router. This made Facebook one of the pioneers in software defined networks (SDN).

And they succeeded; they developed new hardware and software that allowed them to handle far more data than they could have with what was on the market at the time. But then Facebook took an extraordinary step and decided to make what they had created available to everybody else. Jonathan Heiliger at Facebook came up with the idea of making their hardware open source. Designing better data centers was not a core competency for Facebook, and he figured the company would benefit in the future if other outside companies joined them in searching for better data center solutions.

This was a huge contrast to what Google was doing. Google believes that hardware and software are their key differentiators in the market, and so they have kept everything they have developed proprietary. But Facebook had already been using open source software and they saw the benefits of collaboration. They saw that when numerous programmers worked together the result was software that worked better, with fewer bugs, and that could be modified quickly, as needed, by bringing together a big pool of programming resources. And they thought the same thing could happen with data center equipment.

And they were right. Their Open Compute Project has been very successful and has drawn in other large partners. Companies like Apple, HP, and Microsoft now participate in the effort. It has also attracted large industry users like Wall Street firms, which are among the largest users of data center resources. Facebook says that they have saved over $2 billion in data center costs due to the effort, and their data centers are using significantly less electricity per computation than before.

And a new supply chain has grown up around the new concept. Any company can get access to the specifications and design their own version of the equipment. There are manufacturers ready to build anything that comes out of the process, meaning the companies in this collaborative effort have bypassed the traditional telecom vendors and work directly with a factory to produce their gear.

This effort has been very good for these large companies, and good for the nation as a whole because through collaboration these companies have pushed the limits on data center systems to make them less expensive and more efficient. They claim that for now they have leapt forward past Moore’s law and are ahead of the curve.

But as I wrote earlier, this leaves out the rest of the world. Smaller carriers cannot take advantage of this process. Small companies don't have the kind of staff that can work with the design specs, and no factory is going to make a small batch of routers. While the equipment designs and the controlling software are open source, each large member is building different equipment and none of it is available on the open market. And small companies wouldn't know what to do with the hardware if they got it, because it's controlled by open source software that doesn't come with training or manuals.

So smaller carriers are still buying from Cisco and the traditional switch and router makers. The small carriers can still find what they need in the market. But if you look ten years forward this is going to become a problem. Companies like Cisco have always funded their next generation of equipment by working with one or two large customers to develop better solutions. The rest of Cisco’s customers would then get the advantages of this effort as the new technology was rolled out to everybody else. But the largest users of routers and switches are no longer using the traditional manufacturers. That is going to mean less innovation over time in the traditional market. It also means that the normal industry vendors aren’t going to have the huge revenue streams from large customers to make gear affordable for everybody.