The Battle of the Routers

There are several simultaneous forces tugging at companies like Cisco that make network routers. Cloud providers like Amazon and CloudFlare are successfully luring large businesses to move their IT functions from local routers to large data centers. Meanwhile, other companies like Facebook are pushing small, cheap routers that run open source software. But Cisco is fighting back with its push for fog computing, which would place smaller, function-specific routers near the source of data at the edge of the network.

Cloud Computing.

Companies like Amazon and CloudFlare have been very successful at luring companies to move their IT functions into the cloud. It's incredibly expensive for small and medium companies to maintain an IT staff or to hire outsourced IT consultants, and the cloud is reducing both hardware and staffing costs. CloudFlare alone last year announced that it was adding 5,000 new business customers per day to its cloud services.

There are several trends driving this shift to data centers. First, the cloud companies have been able to emulate with software what formerly required expensive routers at a customer's location. This means that companies can get the same functions done for a fraction of the cost of doing IT in-house. The cloud companies are using simpler, cheaper routers that offer brute computing power and that are also becoming more energy efficient. For example, Amazon has designed all of the routers used in its data centers and doesn't buy boxes from the traditional router manufacturers.
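To make that idea concrete, here is a minimal sketch in Python of the core job a router performs: matching a destination address against a forwarding table by longest prefix. The prefixes and next-hop names here are invented for illustration, and production software routers use much faster lookup structures, but the point stands that the function is just software running on generic hardware.

```python
# A minimal sketch of router forwarding done in ordinary software. The
# prefixes and next-hop names are invented for illustration; production
# software routers use faster lookup structures, but the job is the
# same longest-prefix match that dedicated hardware performs.
import ipaddress

# Forwarding table: destination prefix -> next hop
FORWARDING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "core-1",
    ipaddress.ip_network("10.1.0.0/16"): "edge-a",
    ipaddress.ip_network("192.168.0.0/24"): "edge-b",
}

def next_hop(destination: str) -> str:
    """Pick the next hop for a packet using longest-prefix match."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in FORWARDING_TABLE if addr in net]
    if not matches:
        return "default-gateway"
    # The most specific (longest) prefix wins, just as on a hardware router.
    best = max(matches, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

print(next_hop("10.1.2.3"))     # edge-a: the /16 beats the /8
print(next_hop("203.0.113.9"))  # default-gateway: no configured prefix
```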

Businesses are also using this shift as an opportunity to unbundle from the traditional large software packages. Businesses historically have signed up for a suite of software from somebody like Microsoft or Oracle and would live with whatever those companies offered. But today there is a mountain of specialty software that outperforms the big software packages for specific functions like sales or accounting. Both the hardware and the new software are easier to use at the big data centers, and companies no longer need staff or consultants with Cisco certifications sitting between users and the network.

Cheap Servers with Open Source Software.

Not every company wants to use the cloud, and Cisco has new competition for businesses that want to keep local servers. Just this last week both Facebook and HP announced that they are going to start marketing their cheaper routers to enterprise customers. Like most of the companies with huge data centers today, Facebook has developed its own hardware that is far cheaper than traditional routers. These cheaper routers are brute-force computers stripped of everything extraneous, with all of their functionality defined by free open source software; customers are able to run any software they want. HP's new router is an open source Linux-based router from its long-time partner Accton.

Cisco and the other router manufacturers today sell a bundled package of hardware and software, and Facebook's goal is to break the bundle. Traditional routers are not only more expensive than the new generation of equipment, but because of the bundle there is an ongoing 'maintenance fee' for keeping the router software current. This fee runs as much as 20% of the cost of the original hardware annually, meaning a company effectively buys the box all over again every five years. Companies feel like they are paying for traditional routers over and over again, and to some extent they are.

These are the same kinds of fees that were historically common in the telecom industry with companies like Nortel and AT&T / Lucent. Those companies made far more money from maintenance after the sale than they did from the original sales. But when hungry new competitors came along with a cheaper pricing model, their profits collapsed over a few years, bringing down what were then the two largest companies in the telecom space.

Fog Computing.

Cisco is fighting back by pushing an idea called fog computing. This means placing limited-function routers at the edge of the network to avoid having to ship all data to some remote cloud. The fog computing concept is that most of the data collected by the Internet of Things will not necessarily need to be sent to a central repository for processing.

As an example, a factory might have dozens of industrial robots, along with sensors that constantly monitor them to spot troubles before they happen. The local fog computing routers would process a mountain of data over time, but would only communicate with a central hub when they sense some change in operations. With fog computing the local routers would process data for the one very specific purpose of spotting problems, which would save the factory owner from paying for terabits of data transmission while still getting the advantage of being connected to a cloud.
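As a rough sketch of how that edge filtering might work (the sensor name, window size and threshold below are my own invented stand-ins, not anything Cisco has published), the logic is simply: keep a local baseline, crunch every reading locally, and only call home when a reading drifts far from normal.

```python
# A rough sketch of fog-style edge filtering: process every sensor
# reading locally, but only send a message upstream when something
# looks wrong. Sensor name, window size, threshold and the uplink
# are invented for illustration.
import statistics

WINDOW = 1000          # readings kept for the local baseline
THRESHOLD_SIGMA = 3.0  # how far from normal before we alert the cloud

class EdgeMonitor:
    def __init__(self, sensor_id: str):
        self.sensor_id = sensor_id
        self.history: list[float] = []

    def ingest(self, reading: float) -> None:
        """Process one reading locally; alert the hub only on anomalies."""
        if len(self.history) >= WINDOW:
            self.history.pop(0)
        self.history.append(reading)
        if len(self.history) < 30:   # not enough data for a baseline yet
            return
        mean = statistics.mean(self.history)
        stdev = statistics.stdev(self.history)
        if stdev and abs(reading - mean) > THRESHOLD_SIGMA * stdev:
            self.send_to_hub(reading, mean)

    def send_to_hub(self, reading: float, baseline: float) -> None:
        # Stand-in for the rare uplink call; in practice this would be
        # an HTTPS or MQTT message to the central cloud.
        print(f"{self.sensor_id}: anomaly {reading:.1f} (baseline {baseline:.1f})")

monitor = EdgeMonitor("robot-arm-7-vibration")
for value in [1.0, 1.1, 0.9] * 20 + [9.5]:   # steady readings, then a spike
    monitor.ingest(value)
```

Only the single spike generates any traffic to the hub; the other sixty readings never leave the factory floor.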

Fog computing also makes sense for applications that need instantaneous feedback, such as an electric smart grid. When something starts going wrong in an electric grid, taking action immediately can prevent cascading failures, and microseconds can make a difference. Fog computing also makes sense for applications where the local device isn't connected to the cloud 100% of the time, such as a smart car or a monitor on a locomotive.

Leave it to Cisco to find a whole new application for boxes in a market that is otherwise attacking the boxes they have historically built. Fog computing routers are mostly going to be smaller and cheaper than the historical Cisco products, but there is going to be a need for a whole lot of them when the IoT becomes pervasive.

Beyond a Tipping Point

A few weeks ago I wrote a blog called A Tipping Point for the Telecom Industry that looked at the consequences of the revolution in technology that is sweeping our industry. In that blog I made a number of predictions about the natural consequences of drastically cheaper cloud services, such as the mass migration of IT services to the cloud, massive consolidation of switch and router makers, a shift to software defined networks and the consequent explosion in specialized cloud software.

I recently read an interview in Business Insider with Matthew Prince, the founder of CloudFlare. It's a company that many of you may never have heard of, but which today is carrying 5% of the traffic on the web and growing rapidly. CloudFlare started as a cyber-security service for businesses and its primary product helped companies fend off hacker attacks. But the company has also developed a suite of other cloud services. The combination of services has been so effective that the company says it has recently been adding 5,000 new customers per day and is growing at an annual rate of 450%.

In that interview Prince pointed out two trends that define how quickly the traditional market is changing. The first trend is that the functions traditionally served by hardware from companies like Cisco and HP are moving to the cloud, to companies like Amazon and CloudFlare. The second is that companies are quickly unbundling from traditional software packages.

CloudFlare is directly taking on the routing and switching functions that have been served most successfully by Cisco. CloudFlare offers services such as routing and switching, load balancing, security, DDoS mitigation and performance acceleration. But by being cloud-based, the CloudFlare services are less expensive, nimbler and don't require detailed knowledge of Cisco's proprietary software. Cisco has had an amazing run in the industry and has had huge earnings for decades. Its model has been based upon performing network functions very well, but at a cost. Cisco sells fairly expensive boxes that then come with even more expensive annual maintenance agreements. Companies also need to hire technicians and engineers with Cisco certifications in order to operate a Cisco network.

But the same trends that are dropping the cost of cloud services exponentially are going to kill Cisco's business model. It's now possible for a company like CloudFlare to use brute computing power in data centers to perform the same functions as Cisco. Companies no longer need to buy boxes and instead pay only for the specific network functions they use. And companies no longer need to rely on expensive technicians with a Cisco bias. Companies can also be nimble and change the network on the fly as needed, without having to wait for boxes or plan for expensive network cutovers.
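As a toy illustration of a network function delivered as plain software, here is a sketch of round-robin load balancing with simple health tracking, one of the appliance jobs mentioned above. The backend names are invented; notice that 'changing the network on the fly' amounts to a one-line change rather than a hardware cutover.

```python
# A toy network function in plain software: round-robin load balancing
# with simple health tracking. Backend names are invented for
# illustration; a real service adds health checks, weights and metrics.
from itertools import cycle

class LoadBalancer:
    def __init__(self, backends: list[str]):
        self.healthy = set(backends)       # backends currently in service
        self.rotation = cycle(backends)    # fixed round-robin order

    def mark_down(self, backend: str) -> None:
        """Take a backend out of rotation, e.g. after failed health checks."""
        self.healthy.discard(backend)

    def mark_up(self, backend: str) -> None:
        """Return a recovered backend to rotation."""
        self.healthy.add(backend)

    def pick(self) -> str:
        """Return the next healthy backend in round-robin order."""
        if not self.healthy:
            raise RuntimeError("no healthy backends")
        while True:
            candidate = next(self.rotation)
            if candidate in self.healthy:
                return candidate

lb = LoadBalancer(["app-1", "app-2", "app-3"])
print([lb.pick() for _ in range(3)])   # ['app-1', 'app-2', 'app-3']
lb.mark_down("app-2")                  # reconfigure the 'network' on the fly
print([lb.pick() for _ in range(3)])   # app-2 is skipped until it recovers
```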

This change is a direct result of cheaper computing resources. The relentless exponential improvements in most of the major components of the computer world have resulted in a new world order where centralized computing in the cloud is now significantly cheaper than local computing. I summed it up in my last blog saying that 2014 will be remembered as the year the cloud won. It will take a few years, but a cloud that is cheaper today and that is going to continue to get exponentially cheaper will break the business models for companies like Cisco, HP, Dell and IBM. Where there were hundreds of companies making routers and other network components there will soon be only a few companies – those that are the preferred vendors of the companies that control the cloud.

The reverse is happening with software. For the last few decades large corporations have largely used giant software packages from SAP, Oracle and Microsoft. These huge packages integrated all of the software functions of a business, from databases and CRM to accounting, sales and operations. But these software packages were incredibly expensive. They were proprietary and cumbersome to learn. And they never exactly fit what a company wanted; it was typical for the company to bend to meet the limitations of the software rather than changing the software to fit the company.

But this is rapidly changing because the world is being flooded by a new generation of software that handles individual functions better than the big packages did. There are now dozens of different collaboration platforms available. There are numerous packages for the sales and CRM functions. There are specialized packages for accounting, human resources and operations.

All of these new software packages are made for the cloud. This makes them cheaper and, for the most part, easier to learn and more intuitive to use. They are readily customizable by each company to fit its culture and needs. For the most part the new world of software is built from the user interface backwards, meaning that the user interface is made as easy and intuitive as possible. The older platforms were built with centralized functions in mind first and ended up requiring a lot of training for users.

All of this means that over the next decade we are going to see a huge shift in the corporate landscape. We are going to see a handful of cloud providers performing all of the network functions instead of hundreds of box makers. And in place of a few huge software companies we are going to see thousands of specialized software companies selling into niche markets and giving companies cheaper and better software solutions.

More New Technologies

I periodically report on new technologies that I find interesting. This past week I ran across several new technologies that seem pretty revolutionary and which could all result in significant improvements in our lives.


First is a new green technology. Scientists at the University of Toronto have developed something they are calling colloidal quantum dots. These new materials have the potential to revolutionize solar cell technology. Today's solar cells all work by juxtaposing two types of materials: an n-type material that is rich in electrons and a p-type material that is poor in electrons. A solar cell creates a current by using the energy from sunlight to move electrons from the electron-rich material to the electron-poor one. Until now, however, n-type materials have lost potency when exposed to air and thus have had to be sealed inside the solar cells we are familiar with. But with colloidal quantum dots we might be able to have cheap solar cells everywhere. Picture having them embedded into outdoor paint so that every roof, home, bridge or cell tower could be generating electricity.


Next is Spansion, which has developed and is manufacturing energy harvesting chips that can generate enough electricity to power themselves. They generate small amounts of electricity through techniques such as taking advantage of vibrations, sunlight or differences in heat. This is one of the breakthroughs needed to unleash the Internet of Things. Without it, every IoT sensor would need its own battery, and replacing those batteries has been a cost barrier to realistic deployment of sensor networks. But self-powering chips make it possible to deploy sensor networks that can monitor crops, herds, pollution or just about anything else.


Another big breakthrough comes from HP, which is calling it 'The Machine'. The Machine brings together a number of different technologies that together are going to revolutionize the computers we use to process large amounts of data. HP has developed a new computer from scratch. It uses specialized core processors rather than a series of generic processors. It will use photonics rather than electronics, eliminating copper wiring. It will use memristors for a unified memory that is as fast as RAM but that can also store data like a flash drive. And it has a 3D architecture that packs components closer together than can be done using traditional flat chipsets.


All of these changes will result in servers that are about 6 times faster than today's best servers, use only one-eightieth of the power and require significantly less space. Probably the most significant aspect of this is the reduced need for power. Today's data centers have often been built where power is cheapest, but cut power consumption by a factor of 80 and almost any closet can become a small data center. It's been reported that both Google and Amazon are working on their own versions of new servers and may very well be doing something similar. But HP is the first to announce specifics, and it hopes to be able to ship these servers by 2018.


Finally, math gets a headline because a company called Code On Technologies promises to use math to speed up existing data transmissions. Most people probably don't realize how much time and energy is spent during a data transmission today to reassemble the data stream at the receiving end. Packets essentially get numbered, and the receiving end of each transmission searches until it finds all of the needed packets before passing on the information. The process is quite inefficient because of those searches for missing packets, and the reassembly is done over and over as a piece of data goes from device to device across the Internet.


Code On has developed a technique that instead codes data into a mathematical equation. Rather than 'numbering' the packets, it assigns each packet an identity in terms of the solution to an equation. On the receiving end there no longer has to be a constant search for missing packets, since the receiver can reconstruct what was in the missing packets by solving the equation. This sounds esoteric, but it could improve the transmission speeds on current networks by as much as twenty times by vastly improving the process of reconstructing the data at the receiving end of each transaction. This could make for much faster satellite or WiFi networks without having to change those networks. This makes a math nerd smile!
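Code On hasn't published the details of its scheme, but the textbook version of this idea is random linear network coding. In the Python sketch below (my own illustration, not Code On's code), each coded packet is a random XOR of the original packets tagged with its 'equation', and the receiver recovers everything at once by Gaussian elimination over GF(2), so it never has to wait for one specific numbered packet.

```python
# A rough sketch of coding packets as equations: random linear network
# coding over GF(2). Each coded packet is the XOR of a random subset of
# the originals, tagged with which subset was used (the "coefficients"
# of the equation). Any full-rank set of combinations lets the receiver
# solve for all the originals at once. Code On's actual scheme isn't
# public; this is the textbook version of the concept.
import random

def encode(packets: list[int]) -> tuple[int, int]:
    """Return (coefficient_mask, payload): a random XOR of the packets."""
    mask = 0
    while mask == 0:                    # an all-zero equation is useless
        mask = random.getrandbits(len(packets))
    payload = 0
    for i, pkt in enumerate(packets):
        if (mask >> i) & 1:
            payload ^= pkt
    return mask, payload

def decode(coded: list[tuple[int, int]], k: int) -> list[int] | None:
    """Gaussian elimination over GF(2); returns None until rank reaches k."""
    basis: dict[int, tuple[int, int]] = {}   # pivot bit -> (mask, payload)
    for mask, payload in coded:
        while mask:                          # reduce by existing pivots
            pivot = mask.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = (mask, payload)
                break
            pmask, ppayload = basis[pivot]
            mask ^= pmask
            payload ^= ppayload
    if len(basis) < k:
        return None
    for pivot in sorted(basis):              # back-substitute, low bits first
        mask, payload = basis[pivot]
        for low in range(pivot):
            if (mask >> low) & 1:
                lmask, lpayload = basis[low]
                mask ^= lmask
                payload ^= lpayload
        basis[pivot] = (mask, payload)       # now mask == 1 << pivot
    return [basis[i][1] for i in range(k)]

originals = [0xDEAD, 0xBEEF, 0xCAFE, 0xF00D]   # four "packets" to deliver
received: list[tuple[int, int]] = []
result = None
while result is None:                    # any mix of coded packets will do,
    received.append(encode(originals))   # in any order, with any losses
    result = decode(received, len(originals))
print(result == originals, f"(decoded from {len(received)} coded packets)")
```

The receiver simply keeps accepting whatever coded packets arrive until the system of equations has full rank; that is what eliminates the stop-and-wait hunt for one missing packet number.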