2017 Technology Trends

I usually take a look once a year at the technology trends that will be affecting the coming year. There have been so many other topics of interest lately that I didn’t quite get around to this by the end of last year. But here are the trends that I think will be the most noticeable and influential in 2017:

The Hackers are Winning. Possibly the biggest news all year will be continued security breaches that show that, for now, the hackers are winning. The traditional ways of securing data behind firewalls are clearly not effective, and firms from the biggest with the most sophisticated security to the simplest small businesses are getting hacked – and sometimes the simplest methods of hacking (such as phishing for passwords) are still effective.

These things run in cycles and new solutions will be tried to stop the hacking. The most interesting trend I see is to get away from storing data in huge databases (which is what hackers are looking for) and instead distributing that data in such a way that there is nothing worth stealing even after a hacker gets inside the firewall.
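One simple way to picture that approach (my own minimal sketch, not a description of any vendor's product) is splitting each record into randomized shares stored in separate places, so that a stolen copy of any one store is worthless by itself:

```python
import os

def split_into_shares(data: bytes, num_shares: int = 3) -> list:
    """Split data into XOR shares; any single share looks like random noise."""
    shares = [os.urandom(len(data)) for _ in range(num_shares - 1)]
    last = bytes(data)
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def recombine(shares) -> bytes:
    """XOR all shares together to recover the original; every share is required."""
    result = bytes(len(shares[0]))
    for share in shares:
        result = bytes(a ^ b for a, b in zip(result, share))
    return result

# Hypothetical record, used only for illustration
record = b"account 1234, balance 567.89"
shares = split_into_shares(record)
assert recombine(shares) == record   # a hacker who steals one share learns nothing
```

Real systems would use more sophisticated schemes, but the principle is the same: no single repository holds anything worth stealing.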

We Will Start Talking to Our Devices. This has already begun, but this is the year when a lot of us will make the change and start routinely talking to our computers and smart devices. My home has started to embrace this and we have different devices using Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa. My daughter has made the full transition and now uses talk-to-text instead of typing on the screen, but us oldsters are catching up fast.

Machine Learning Breakthroughs will Accelerate. We saw some amazing breakthroughs with machine learning in 2016. A computer beat the world Go champion. Google Translate can now accurately translate between a number of languages. Just this last week a computer was taught to play poker and was playing at championship level within a day. It’s now clear that computers can master complex tasks.

The numerous breakthroughs this year will come as a result of having the AI platforms at Google, IBM and others available for anybody to use. Companies will harness this capability to use AI to tackle hundreds of new complex tasks this year and the average person will begin to encounter AI platforms in their daily life.

Software Instead of Hardware. We have clearly entered another age of software. For several decades hardware was king and companies were constantly updating computers, routers, switches and other electronics to get faster processing speeds and more capability. The big players in the tech industry were companies like Cisco that made the boxes.

But now companies are using generic hardware in the cloud and are looking for new solutions through better software rather than through sheer computing power.

Finally a Start of Telepresence. We’ve had a few unsuccessful shots at telepresence in our past. It started a long time ago with the AT&T video phone. But then we tried using expensive video conference equipment and it was generally too expensive and cumbersome to be widely used. For a while there was a shot at using Skype for teleconferencing, but the quality of the connections often left a lot to be desired.

I think this year we will see some new commercial vendors offering a more affordable and easier to use teleconferencing platform that is in the cloud and that will be aimed at business users. I know I will be glad not to have to get on a plane for a short meeting somewhere.

IoT Technology Will Start Being in Everything. But for most of us, at least for now it won’t change our lives much. I’m really having a hard time thinking I want a smart refrigerator, stove, washing machine, mattress, or blender. But those are all coming, like it or not.

There will be More Press on Hype than on Reality. Even though there will be amazing new things happening, we will still see more press on technologies that are not here yet rather than those that are. So expect mountains of articles on 5G, self-driving cars and virtual reality. But you will see fewer articles on the real achievements, such as talking about how a company reduced paperwork 50% by using AI or how the average business person saved a few trips due to telepresence.

Technology and Telecom Jobs

In case you haven’t noticed, the big companies in the industry are cutting a lot of jobs – maybe the biggest job cuts ever in the industry. These cuts are due to a variety of reasons, but technology change is a big contributor.

There have been a number of announced staff cuts by the big telecom vendors. Cisco recently announced it would cut as many as 5,500 jobs, or about 7% of its global workforce. Cisco’s job cuts are mostly due to the Open Compute Project, where the big data center owners like Facebook, Amazon, Google, Microsoft and others have turned to a model of developing and directly manufacturing their own routers, switches and other data center gear. Cloud data services are meanwhile wiping out the need for corporate data centers as companies move most of their computing processes to the much more efficient cloud. Even customers that are still buying Cisco boxes are buying less, since the new technology provides a huge increase in capacity over the old and they need fewer routers and switches.

Ericsson has laid off around 3,000 employees due to falling business. The biggest culprit for them is SDNs (Software Defined Networks). Most of the layoffs are related to cell site electronics. The big cellular companies are actively converting their cell sites to centralized control with the brains in the core. This will enable these companies to make one change and have it instantly implemented in tens of thousands of cell sites. Today that process requires upgrading the brains at each cell site, which means sending a horde of technicians out to travel to and update each site.
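As a hypothetical sketch of why that eliminates so much field labor (the site names and parameters below are invented for illustration), a centralized controller turns one change into an automatic update of every site:

```python
# Invented example: a central SDN-style controller applying one change to every cell site.
# Under the old model each of these updates meant a technician visit or a
# site-by-site upgrade of the local "brains."

cell_sites = [f"site-{n:05d}" for n in range(1, 20001)]   # tens of thousands of sites

def push_config(site: str, config: dict) -> bool:
    """Stand-in for whatever southbound protocol a real controller would use."""
    return True   # pretend the site acknowledged the change

new_config = {"software_release": "r42", "handover_threshold_dbm": -105}

acknowledged = sum(push_config(site, new_config) for site in cell_sites)
print(f"{acknowledged:,} sites updated from a single change made at the core")
```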

Nokia plans to lay off at least 3,000 employees and maybe more. Part of these layoffs is due to the final integration of the Alcatel-Lucent purchase, but the layoffs also have to do with the technology changes that are affecting every vendor.

Cuts at operating carriers are likely to be a lot larger. A recent article in the New York Times reported that internal AT&T projections had the company planning to eliminate as many as 30% of its jobs over the next few years, which would be 80,000 people and the biggest telco layoff ever. The company has never officially mentioned a number, but top AT&T officials have been warning all year that many of the job functions at the company are going to disappear and that only nimble employees willing to retrain have any hope of retaining a long-term job.

AT&T will be shedding jobs for several reasons. One is the big reduction in technicians needed to upgrade cell sites. But an even bigger reason is the company’s plan to decommission and walk away from huge amounts of its copper network. There is no way to know if the 80,000 number is valid, but even a reduction half that size would be gigantic.

And vendor and carrier cuts are only a small piece of the cuts that are going to be seen across the industry. Consider some of the following trends:

  • Corporate IT staffs are downsizing quickly as computer functions move to the cloud. A huge number of technicians with Cisco certifications, for example, are finding themselves out of work as their companies eliminate their in-house data centers.
  • On the flip side of that, huge data centers are being built to take over these same IT functions with only a tiny handful of technicians. I’ve seen reports where cities and counties gave big tax breaks to data centers because they expected them to bring jobs, but instead a lot of huge data centers are operating with fewer than ten employees.
  • In addition to employees, there are fleets of contractor technicians that do things like updating cell sites, and those opportunities are going to dry up over the next few years. There will always be opportunities for technicians brave enough to climb cell towers, but that is not a giant work demand.

It looks like over the next few years there are going to be a whole lot of unemployed technicians. Technology companies have always been cyclical and it’s never been unusual for engineers and technicians to have worked for a number of different vendors or carriers during a career, yet in the past when there was a downsizing in one part of the industry the slack was mostly picked up somewhere else. But we might be looking at a permanent downsizing this time. Once SDN networks are in place the jobs for those networks are not coming back. Once most IT functions are in the cloud those jobs aren’t coming back. And once the rural copper networks are replaced with 5G cellular those jobs aren’t coming back.

Looking Closer at 5G

Cisco recently released a white paper titled Cisco 5G Vision Series: Laying the Foundation for New Technologies, Use Cases, and Business Models that lays out their vision of how the cellular industry can migrate from 4G to 5G. It’s a highly technical read and provides insight on how 5G might work and when we might see it in use.

As the white paper points out, the specific goals of 5G are still being developed. Both 4G and 5G are basically detailed sets of standards used to make sure devices can work on any network that meets those standards. Something that very few people realize is that almost none of the supposed 4G networks in this country actually meet the 4G standards. We are just now seeing the deployment around the world of the first technologies – LTE-Advanced and WiMAX (802.16m) – that meet the original 4G standards. It’s been typical for cellular providers to claim to have 4G when they’ve only met some tiny portion of the standard.

And so, long before we see an actual 5G deployment we are first going to see the deployment of LTE-Advanced, followed by generations of improvements that are best described as pre-5G (just as most of what we have today is pre-4G). This evolution means that we should expect incremental improvements in the cellular networks, not one big sweeping overhaul.

The paper makes a very clear distinction between indoor 5G and outdoor 5G (which is cellular service). Cisco says that already today 80% of cellphone usage happens indoors, mostly using WiFi. They envision that in places with a lot of people, like stadiums, shopping centers or large business buildings, there will be a migration from WiFi to millimeter wave spectrum using the 5G standard. This very well could ultimately result in gigabit speeds on devices with the right antennas to receive that signal.

But these very fast indoor speeds are going to be limited to those places where it’s economically feasible to deploy multiple small cells – and places that have good fiber backhaul. That’s going to mean places with lots of demand and the willingness to pay for such deployments. So you might see fast indoor wireless speeds in hospitals, but you are not going to see gigabit speeds while waiting for your car to be repaired or sitting in the dentist’s waiting room. And most importantly, you are not going to see gigabit speeds using millimeter wave spectrum outside. All of the early news articles talking about outdoor gigabit cellular speeds were way off base. The misunderstanding is understandable, since the press releases from cellular companies have been nebulous and misleading.

So what can be expected outdoors on our cell phones? Cisco says that the ultimate goal of 5G is to be able to deliver 50 Mbps speeds everywhere. At the same time, the 5G standards have the goal of being able to handle a lot more connections at a given cell site. That goal will mean better reception at football games, but it also means a lot more connections will be available to connect to smart cars or Internet of Things devices.

But don’t expect much faster cellular speeds for quite some time. Remember that the goal of 4G was to deliver about 15 Mbps speeds everywhere, and yet today the average LTE connection in the US runs at about half of that speed. The relatively slow speeds of today’s LTE are due to a number of different reasons. First is the fact that most cell sites are still running pre-4G technology. The unwillingness of the cellular companies to buy sufficient backhaul bandwidth at cell sites is also a big contributor. I’ve seen in the press that both Verizon and AT&T are looking for ways to reduce backhaul costs – that’s thought to be the major motivation for Verizon to buy XO Communications. Another major issue is that existing cell sites are too far apart to deliver fast data speeds, and it will require a massive deployment of small cell sites (and the accompanying fiber backhaul) to fix the spacing problem.

So long before we see 50 Mbps cellular speeds we will migrate through several generations of incremental improvements in the cellular networks. We are just now seeing the deployment of LTE-Advanced which will finally bring 4G speeds. After that, Cisco has identified what looks to be at least three or four steps of improvements that we will see before we achieve actual 5G cellular.

How long might all of this take? The industry is scheduled to finalize the 5G standards by 2020, perhaps a little sooner. It looks like there will be a faster push to find millimeter wave solutions for indoor 5G, so we might see those technologies coming first. But it has taken a decade from when the large cellular companies first announced 4G deployments until now, when we are finally starting to see networks that meet that standard. I can’t imagine that the 5G migration will go any faster. And even when 5G gets here, it’s going to hit urban areas long before it hits rural areas. One doesn’t have to drive too far into the country today to find places that are still operating at 3G.

Upgrading to 5G in steps will be expensive for the cellular providers and they are not likely to implement changes too quickly. We will likely see a series of incremental improvements, as they have been making for many years. So it would not be surprising if it takes until at least 2030 before there is a cellular system in place that fully meets the 5G standard. Of course, long before then the marketing departments of the wireless providers will tell us that 5G is here – and when they do, everybody looking for blazingly fast cellphone speeds is going to be disappointed.

Cisco’s Latest Web Predictions

Cisco recently published their annual Visual Networking Index and as usual it’s full of interesting facts and predictions. Here are a few of the key highlights that I think small carriers will find interesting:

Busy-hour (the busiest 60-minute period in a day) Internet traffic increased 51 percent in 2015, compared with 29 percent growth in average traffic. And it’s expected to keep growing faster, with Cisco predicting that by 2020 busy-hour traffic will have increased 4.6 times while overall web usage will only double. This is a big change for network providers. Since the advent of web video we’ve seen the evenings become the busiest times on the web, but this trend shows that the evening usage is going to be far greater than the rest of the day. If a network wants to offer a satisfactory service it must be designed to satisfy the busy evening hours, which in four short years will be over four times busier than today.
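Here is the quick back-of-the-envelope arithmetic behind that claim (my own calculation from the figures above, not something taken from the Cisco report):

```python
# Cisco's forecast: busy-hour traffic grows 4.6x from 2015 to 2020 while average
# traffic only doubles. The arithmetic below shows what that implies.
busy_hour_growth = 4.6
average_growth = 2.0
years = 5

print(f"peak-to-average ratio grows {busy_hour_growth / average_growth:.1f}x")           # ~2.3x
print(f"implied busy-hour growth: {busy_hour_growth ** (1 / years) - 1:.0%} per year")    # ~36%
print(f"implied average growth:   {average_growth ** (1 / years) - 1:.0%} per year")      # ~15%
```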

Telco veterans will remember that this was the historical pattern for voice traffic, and now we are seeing the same thing with residential broadband. It means networks must be engineered for the busy hour and will be underutilized the rest of the time. Failure to design for this growth means customer dissatisfaction during the busiest hours. It also implies growing demand for faster speeds.

IP video traffic will be 82 percent of all consumer Internet traffic by 2020, up from 70 percent in 2015. As you might expect, much of the increased data traffic on the web will be driven by video as more people use the web for entertainment.

Globally, Internet traffic will reach 21 GB per capita by 2020, up from 7 GB per capita in 2015. This demonstrates that the total amount of data on the web is going to continue to grow at a torrid pace. Part of this growth will come from adding new users to the web, but web traffic everywhere is still growing rapidly.

Broadband speeds will nearly double by 2020 . . . global fixed broadband speeds will reach 47.4 Mbps, up from 24.7 Mbps in 2015. So not only will Internet volumes grow, but customers are going to demand faster speeds. These numbers are a little deceptive in that they combine business and residential fixed broadband speeds together. But still, service providers need to be prepared to increase customer speeds to keep them happy. Expect networks that can’t increase speeds to grow increasingly unpopular.

Business IP traffic will grow at a CAGR of 18% from 2015 to 2020. It’s easy to assume that video is causing consumer data usage to grow much faster than business usage, but business broadband demand is growing almost as quickly as consumer broadband demand.

Smartphone traffic will exceed PC traffic by 2020. This is pretty amazing considering that in 2015 PCs drove 53% of all web traffic while smartphones generated only 8%. But by 2020 Cisco is predicting that traffic from PCs will fall to 29% and traffic from smartphones will grow to 30%. Of course, in North America with our extensive WiFi, a lot of this smartphone traffic will end up on landline connections. To reach these numbers, mobile broadband usage will grow 53% per year through 2020.

The Death of 2.4 GHz WiFi?

It’s been apparent for a few years that the 2.4 GHz band of WiFi is getting more crowded. The very thing that has made the spectrum so useful – the fact that it allows multiple users to share the spectrum at the same time – is now starting to make the spectrum unusable in a lot of situations.

Earlier this year Apple and Cisco issued a joint paper on best network practices for enterprises and said that “the use of the 2.4 GHz band is not considered suitable for use for any business and/or mission critical enterprise applications.” They recommend that businesses avoid the spectrum and instead use the 5 GHz spectrum band.

There are a number of problems with the spectrum. In 2014 the Wi-Fi Alliance said there were over 10 billion WiFi-enabled devices in the world, with 2.3 billion new devices shipping each year. And big plans to use WiFi to connect IoT devices mean that the number of new devices is going to continue to grow rapidly.

And while most of the devices sold today can work with both the 2.4 GHz and the 5 GHz spectrum, a huge percentage of devices are set to default to several channels of the 2.4 GHz spectrum. This is done so that the devices will work with older WiFi routers, but it ends up creating a huge pile of demand in only part of the spectrum. Many devices can be reset to other channels or to 5 GHz, but the average user doesn’t know how to make the change.

There is no doubt that the spectrum can get full. I was in St. Petersburg, Florida this past weekend and at one point I saw over twenty WiFi networks, all contending for the spectrum. The standard gives each user on each of these networks a little slice of the available bandwidth, which degrades performance for everyone in a local neighborhood. And in addition to those many networks I am sure there were many other devices trying to use the spectrum. The 2.4 GHz band is also used by Bluetooth devices and video cameras, and it is one of the primary bands of interference emitted by microwave ovens.
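To put rough numbers on that contention (illustrative figures of my own, not measurements), twenty networks squeezed onto the three non-overlapping 2.4 GHz channels don’t leave much for anyone:

```python
# Illustrative arithmetic with assumed numbers, just to show how fast shared airtime divides up.
channel_throughput_mbps = 50     # optimistic real-world throughput of one 2.4 GHz channel
non_overlapping_channels = 3     # 2.4 GHz WiFi has only three non-overlapping channels (1, 6, 11)
visible_networks = 20            # networks observed contending in one neighborhood

networks_per_channel = visible_networks / non_overlapping_channels
fair_share_mbps = channel_throughput_mbps / networks_per_channel

print(f"~{networks_per_channel:.1f} networks per channel")
print(f"~{fair_share_mbps:.1f} Mbps per network if airtime were divided evenly")   # ~7.5 Mbps
# ...and that is before Bluetooth, video cameras, and microwave ovens add their interference.
```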

We are an increasingly wireless society. It was only a decade or so ago when people were still wiring new homes with Category 5 cable so that the whole house could get broadband. But we’ve basically dropped the wires in favor of connecting everything through a few channels of WiFi. For those in crowded places like apartments, dorms, or businesses, the sheer number of WiFi devices within a small area can be overwhelming.

I’m not sure there is any really good long-term solution. Right now there is a lot less contention in the 5 GHz band, but one can imagine that in less than a decade it will be just as full as the 2.4 GHz spectrum is today. We just started using the 5 GHz spectrum in our home network and saw a noticeable improvement. But soon everybody will be using it as much as the 2.4 GHz spectrum. Certainly the FCC can put bandaids on WiFi by opening up new swaths of spectrum for public use. But each new band of spectrum is going to quickly get filled.

The FCC is very aware of the issues with 2.4 GHz spectrum and several of the Commissioners are pushing for the use of 5.9 GHz spectrum as a new option for public use. But this spectrum, which has been designated for dedicated short-range communications service (DSRC), was set aside in 1999 for smart vehicles to communicate with each other to avoid collisions. Until recently the spectrum has barely been used, but with the rapid growth of driverless cars we are finally going to see big demand for it – and demand we don’t want to muck up with other devices. I, for one, do not want my self-driving car to have to compete for spectrum with smartphones and IoT sensors in order to make sure I don’t hit another car.

The FCC has a big challenge in front of them now because as busy as WiFi is today it could be vastly more in demand decades from now. At some point we may have to face the fact that there is just not enough spectrum that can be used openly by everybody – but when that happens we could stop seeing the amazing growth of technologies and developments that have been enabled by free public spectrum.

The Ever-Growing Internet

I spent some time recently looking through several of Cisco’s periodic predictions about the future of the Internet. What is most fascinating is that they are predicting continuing rapid growth for almost every kind of Internet traffic. This is certainly a warning to all network owners – a lot more bandwidth usage will be coming your way.

Cisco predicts that total worldwide Internet usage will grow from 72 Exabytes (an Exabyte being one billion Gigabytes) per month in 2015 to 168 Exabytes per month in 2019 – compound growth of roughly 24% per year, which more than doubles total traffic in just four years. They published a short chart of the history of global Internet bandwidth which is eye-popping. Following are some historical and predicted statistics of worldwide bandwidth usage:

  • 1992 100 GB per day
  • 1997 100 GB per hour
  • 2002 100 GB per second
  • 2007 2,000 GB per second
  • 2014 16,144 GB per second
  • 2019 51,794 GB per second

We know that the current bandwidth usage on the Internet has been driven by an explosion of residential video consumption. Cisco predicts that video will keep growing at a rapid pace. They predict that video bandwidth worldwide will grow from 40 Exabytes per month in 2015 to 140 Exabytes per month in 2019, an increase of 37% per year. Those volumes include all kinds of IP video including Netflix type services, IP Video on Demand, video files exchanged through file sharing, video-streamed gaming, and videoconferencing.

Perhaps the fastest growing segment of the Internet is machine-to-machine (M2M) traffic. Cisco predicts M2M traffic will grow from 0.5 Petabytes (a Petabyte is 1 million Gigabytes) per month in 2015 to 4.6 Petabytes per month in 2019 – more than a ninefold increase. The Internet has always had a core of M2M traffic as the devices that run the web communicate with each other. But all of the billions of devices we are now adding to the web each year also do some coordination. This can range from big bandwidth uses like smart cars down to a smartphone or PC checking whether it has the latest software update.
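For anyone who wants to check the growth rates implied by those endpoints, the compound-growth arithmetic is simple (my own calculation from the figures quoted above, not Cisco's):

```python
def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a starting value, ending value, and span of years."""
    return (end / start) ** (1 / years) - 1

# Endpoints quoted above, 2015 -> 2019 (four years)
print(f"total traffic: {implied_annual_growth(72, 168, 4):.0%} per year")    # ~24%
print(f"IP video:      {implied_annual_growth(40, 140, 4):.0%} per year")    # ~37%
print(f"M2M traffic:   {implied_annual_growth(0.5, 4.6, 4):.0%} per year")   # ~74%
```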

Cisco also predicts that Internet speeds will get faster. For example, for North America they predict that from 2014 to 2019 the percentage of homes that can buy data speeds faster than 10 Mbps will grow from 58% to 74%, those that buy speeds greater than 25 Mbps will grow from 33% to 45%, and those that buy data speeds faster than 100 Mbps will grow from 2% to 8%.

They aren’t quite as rosy for cellular data speeds. They predict that North American speeds will grow from an average of 3 Mbps in 2015 to 6.4 Mbps in 2019. But they show that mobile devices now carry the majority of data traffic worldwide. In 2014 mobile devices carried 54% of worldwide data traffic, and by 2019 they predict mobile devices will carry about 67% of worldwide traffic. It’s important to remember that outside of the US and Europe mobile devices are the predominant gateway to broadband. Cisco also shows that the vast majority of mobile device traffic uses WiFi rather than cellular networks.

Perhaps the statistic that matters most to network engineers is that busy hour traffic (the busiest 60-minute period of the day) is growing about 5% faster per year than the growth of average traffic. ISPs need to buy capacity to handle the busy hour and the demands of video traffic are increasingly coming in the busiest hours.

Cisco shows that the volumes of metro traffic (traffic that stays within a region) already passed long-haul traffic in 2014, and by 2019 they predict that 66% of all web traffic will be metro traffic.

What Are Smart Cities?

I’ve been seeing smart cities mentioned a lot over the last few years and so I spent some time lately reading about them to see what all the fuss is about. I found some of what I expected, but I also found a few surprises.

What I expected to find is that the smart city concept means applying computer systems to automate and improve some of the major systems that operate a city. And that is what I found. The first smart city concept was one of using computers to improve traffic flow, and that is something that is getting better all the time. With sensors in the roads and computerized lights, traffic systems are able to react to the actual traffic and work to clear traffic jams. And I read that this is going to work a lot better in the near future.

But smart city means a lot more. It means constructing interconnected webs of smart buildings that use green technology to save energy or even generate some of the energy they need. It means surveillance systems to help deter and solve crimes. It means making government more responsive to citizen needs in areas like recycling, trash removal, snow removal, and general interfaces with city systems for permits, taxes, and other needs. And it will soon mean integrating the Internet of Things into a city in pursuit of the many ways that government can do a better job.

I also found that this is a worldwide phenomenon and there is some global competition between the US, Europe, China, and India to produce the most efficient smart cities. The conventional wisdom is that smart cities will become the foci of global trade and that smart cities will be the big winners in the battle for global economic dominance.

But I also found a few things I didn’t know. It turns out that the whole smart city concept was dreamed up by companies like IBM, Cisco, and Software AG. The whole phenomenon was not so much a case of cities clamoring for solutions, but rather of these large companies selling a vision of where cities ought to be going. And the cynic in me sees red flags and wonders how much of this phenomenon is an attempt to sell large, overpriced hardware and software systems to cities. After all, governments have always been some of the best clients for large corporations because they will often overpay and have fewer performance demands than commercial customers.

I agree that many of the goals for smart cities sound like great ideas. Anybody who has ever sat at a red light for a long time while no traffic was moving on the cross street has wished that a smart computer could change the light as needed. The savings for a community for more efficient traffic is immense in terms of saved time, more efficient businesses, and less pollution. And most cities could certainly be more efficient when dealing with citizens. It would be nice to be able to put a large piece of trash on the curb and have it whisked away quickly, or to be able to process a needed permit or license online without having to stand in line at a government office.

But at some point a lot of what the smart city vendors are pushing starts to sound like a Big Brother solution. For example, they are pushing surveillance cameras everywhere, tied into software systems smart enough to make sense out of the mountains of captured images. But I suspect that most people who live in a city don’t want their city government spying on and remembering everything they do in public any more than we want the NSA to spy on our Internet usage at the federal level.

So perhaps cities can be made too smart. I can’t imagine anybody who minds if cities get more efficient at the things they are supposed to provide for citizens. People want their local government to fix the potholes, deliver drinkable water, provide practical mass transit, keep the traffic moving, and make them feel safe when they walk down the street. When cities go too much past those basic needs, they have either crossed the line into being too intrusive in our lives or they are competing with things that commercial companies ought to be doing. So I guess we want our cities to be smart, but not too smart.

New Video Format

Six major tech companies have joined together to create a new video format. Google, Amazon, Cisco, Microsoft, Netflix, and Mozilla have combined to create a new group called the Alliance for Open Media.

The goal of this group is to create a video format that is optimized for the web. Current video formats were created before there was widespread video viewing through web browsers on a host of different devices.

The Alliance has listed several goals for the new format:

Open Source. Current video codecs are proprietary, making it impossible to tweak them for a given application.

Optimized for the Web. One of the most important features of the web is that there is no guarantee that all of the bits of a given transmission will arrive at the same time. This is the cause of many of the glitches one gets when trying to watch live video on the web. A web-optimized video codec would be allowed to plow forward with less than complete data. In most cases a small number of missing bits won’t be noticeable to the eye, unlike the fits and starts that often come today when video playback is delayed waiting for packets.

Scalable to any Device and any Bandwidth. One of the problems with existing codecs is that they are not flexible. For example, consider a time when you wanted to watch something in HD but didn’t have enough bandwidth. The only option today is to fall all the way back to an SD transmission, at a far lower quality. But in between these two standards is a wide range of possible options where a smart codec could analyze the available bandwidth and then maximize the transmission by choosing among the many variables within the codec. This means you could produce ‘almost HD’ rather than defaulting to something of much poorer quality.
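A minimal sketch of the kind of decision a smarter codec or player could make (the rung names and bitrates here are hypothetical, chosen only to illustrate stepping down gradually instead of collapsing to SD):

```python
# Hypothetical bitrate ladder: pick the best quality rung that fits the measured
# bandwidth instead of jumping straight from HD down to SD.
LADDER = [
    ("1080p HD",    8.0),   # (label, required Mbps) -- illustrative numbers only
    ("720p",        5.0),
    ("'almost HD'", 3.5),
    ("540p",        2.5),
    ("480p SD",     1.5),
]

def pick_rung(available_mbps: float) -> str:
    """Return the highest-quality rung that fits within the available bandwidth."""
    for label, required_mbps in LADDER:
        if available_mbps >= required_mbps:
            return label
    return "audio only"   # nothing fits

print(pick_rung(9.0))   # 1080p HD
print(pick_rung(4.0))   # 'almost HD' -- the middle ground that a rigid codec skips
print(pick_rung(1.0))   # audio only
```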

Optimized for Computational Footprint and Hardware. This means that the manufacturers of devices would be able to tune the codec specifically for their devices. Not all smartphones or tablets are the same, and manufacturers would be able to choose a video format that maximizes the video display for each of their devices.

Capable of Consistent, High-quality, Real-time Video. Real-time video is a far greater challenge than streaming video. Video content is not uniform in quality and characteristics, so there can be a major difference in quality between two video streams watched on the same device. A flexible video codec could standardize quality much in the same way that a sound system can level out volume differences between audio streams.

Flexible for Both Commercial and Non-commercial Content. A significant percentage of videos watched today are user-generated rather than from commercial sources. It’s just as important to maximize the quality of Vine videos as it is for commercial shows from Netflix.

There is no guarantee that this group can achieve all of these goals immediately, because that’s a pretty tall task. But the combined power of these various firms certainly is promising, and the potential for a new video codec that meets all of these goals is enormous. It would improve the quality of web videos on all devices. I know that for me personally quality matters, which is why I tend to watch videos from sources like Netflix and Amazon Prime. By definition streamed video can be of much higher and more consistent quality than real-time video. But I’ve noticed that my daughter has a far lower standard of quality than I do and watches videos from a wide variety of sources. Improving web video, regardless of the source, will be a major breakthrough and will make watching video on the web enjoyable for a far larger percentage of users.

The Open Compute Project

I wrote recently about how a lot of hardware is now proprietary and how the largest buyers of network gear are designing and building their own equipment and bypassing the normal supply chains. My worry about this trend is that all of the small buyers of such equipment are getting left behind, and it’s not hard to foresee a day when small carriers won’t be able to find affordable network routers and other similar equipment.

Today I want to look one layer deeper into that premise and look at the Open Compute Project. This was started just four years ago by Facebook and is creating the hardware equivalent of open source software like Linux.

Facebook found themselves wanting to do things in their data centers that could not be satisfied by Cisco, Dell, HP or the other traditional vendors of switches and routers. They were undergoing tremendous growth and their traffic was increasing faster than their networks could accommodate.

So Facebook followed the trend set by other large companies like Google, Amazon, Apple, and Microsoft, and set off to design their own data centers and data center equipment. Facebook had several goals. They wanted to make their equipment far more energy efficient, because data centers are huge generators of heat and they were using a lot of energy to keep servers cool and wanted a greener solution. They also wanted to create routers and switches that were fast, yet simple and basic, and they wanted to control them with centralized software – which differed from the rest of the market, where the brains were built into each network router. This made Facebook one of the pioneers in software defined networking (SDN).

And they succeeded; they developed new hardware and software that allowed them to handle far more data than they could have done with what was on the market at the time. But then Facebook took an extraordinary step and decided to make what they had created available to everybody else. Jonathan Heiliger at Facebook came up with the idea of making their hardware open source. Designing better data centers was not a core competency for Facebook and he figured that the company would benefit in the future if other outside companies joined them in searching for better data center solutions.

This was a huge contrast to what Google was doing. Google believes that hardware and software are their key differentiators in the market, and so they have kept everything they have developed proprietary. But Facebook had already been using open source software and they saw the benefits of collaboration. They saw that when numerous programmers worked together the result was software that worked better, with fewer bugs, and that could be modified quickly as needed by bringing together a big pool of programming resources. And they thought this same thing could happen with data center equipment.

And they were right. Their Open Compute Project has been very successful and has drawn in other large partners. Companies like Apple, HP, and Microsoft now participate in the effort. It has also drawn in large industry users like Wall Street firms who are some of the largest users of data center resources. Facebook says that they have saved over $2 billion in data center costs due to the effort and their data centers are using significantly less electricity per computation than before.

And a new supply chain has grown around the new concept. Any company can get access to the specifications and design their own version of the equipment. There are manufacturers ready to build anything that comes out of the process, meaning that all of the companies in this collaborative effort have bypassed the traditional telecom vendors and work directly with a factory to produce their gear.

This effort has been very good for these large companies, and good for the nation as a whole because through collaboration these companies have pushed the limits on data center systems to make them less expensive and more efficient. They claim that for now they have leapt forward past Moore’s law and are ahead of the curve.

But as I wrote earlier, this leaves out the rest of the world. Smaller carriers cannot take advantage of this process. Small companies don’t have the kind of staff that can work with the design specs, and no factory is going to make a small batch of routers. While the equipment designs and controlling software are open source, each large member is building different equipment and none of it is available on the open market. And small companies wouldn’t know what to do with the hardware if they got it, because it’s controlled by open source software that doesn’t come with training or manuals.

So smaller carriers are still buying from Cisco and the traditional switch and router makers. The small carriers can still find what they need in the market. But if you look ten years forward this is going to become a problem. Companies like Cisco have always funded their next generation of equipment by working with one or two large customers to develop better solutions. The rest of Cisco’s customers would then get the advantages of this effort as the new technology was rolled out to everybody else. But the largest users of routers and switches are no longer using the traditional manufacturers. That is going to mean less innovation over time in the traditional market. It also means that the normal industry vendors aren’t going to have the huge revenue streams from large customers to make gear affordable for everybody.

The Shift To Proprietary Hardware

There is a trend in the industry that is not good for smaller carriers. More and more I see the big companies designing proprietary hardware just for themselves. While that is undoubtedly good for the big companies, and I am sure it saves them a lot of money, it is not good for anybody else.

I first started noticing this a few years ago with settop boxes. It used to be that Comcast and the other large cable companies used the same settop boxes as everybody else. And their buying power was so huge that it drove down the cost of settop boxes for everybody in the industry. It was standard for the large companies to put their own name tags on the front of the boxes, but for the most part they were the same boxes that everybody else could buy, from the same handful of manufacturers.

But then I started seeing news releases and stories indicating that the largest cable companies had developed proprietary settop boxes of their own. One driver for this change is that the carriers are choosing different ways to bring broadband to the settop box. Another change is that the big companies are adding different features, and are modifying the hardware to go along with custom software. Cable companies are even experimenting with very non-traditional settop box platforms like Roku or the various game consoles.

I see this same thing going on all over the industry. The cable modems and customer gateways that the large cable companies and the large telcos use are proprietary and designed just for them. I recently learned that the WiFi units that Comcast and other large cable companies are deploying outdoors are proprietary to them. Google has designed its own fiber-to-the-premise equipment. And many companies including Amazon, Facebook, Google, Microsoft, and others are designing their own proprietary routers to use in their cloud data centers.

In all of these cases (and many others that I haven’t listed here), the big companies used to buy off-the-shelf equipment. They might have had a slightly different version of some of the hardware, but not different enough that it made a difference to the manufacturers. Telco has always been an industry where only a handful of companies make any given kind of electronics. Generally, smaller companies bought from whichever vendors the big companies chose, since those vendors had the economy of scale.

But now the big carriers are not only using proprietary hardware, but a lot of them are getting it manufactured for themselves directly, without one of the big vendors in the middle. You can’t blame a large company for this; I am sure they save a lot of money by cutting Alcatel/Lucent, Cisco, and Motorola out of the supply chain. But this tendency is putting a hurt on the traditional vendors and making it harder for them to survive.

It’s going to get worse. Currently there is a huge push in many parts of the telecom business to use software-defined networking (SDN) to simplify field hardware and control everything from the cloud. Since the large carriers will shift to SDN networks long before smaller carriers, the big companies will be using very different gear at the edges of the network – and those are the parts of the network that cost the most.

This is a problem for smaller carriers since they no longer benefit from buying the same devices as the large companies and riding on their huge economies of scale. Over time this is going to mean the prices for the basic components smaller carriers buy are going to go up. And in the worst case there might not be any vendor that can make a business case for manufacturing a given component for the small carriers. One of the advantages of having healthy large manufacturers in the industry was that they could take a loss on some product lines as long as the whole suite of products they sold made a good profit. That will probably no longer be the case.

I hate to think about where this trend is going to take the industry in five to ten years, and I add it to the list of things that small carriers need to worry about.