Augmented vs. Virtual Reality

We are about to see the introduction of the new generation of virtual reality machines on the market. Not far behind them will probably be a number of augmented reality devices. Network operators should keep an eye on both, because they are the next generation of devices that will be asking for significant bandwidth.

The term ‘augmented reality’ has been around since the early 1990s and describes any technology that overlays a digital interface on the physical world. Until now, augmented reality has involved projecting holograms that blend into what people see in the real world. Virtual reality takes a very different approach and immerses a person in a fully digital world by projecting stereoscopic 3D images onto a screen in front of the eyes.

A number of virtual reality headsets are going to hit the market late this year into next year:

  • HTC hopes to get its Vive, developed in conjunction with Valve, to market by Christmas of this year. The device is a VR headset that incorporates some augmented reality, allowing a user to move around and interact with virtual objects.
  • Oculus Rift, owned by Facebook, is perhaps the most anticipated release and is expected to hit the market sometime in 2016.
  • Sony is planning on releasing Project Morpheus in 1Q 2016. This device will be the first VR device integrated into an existing game console.
  • Samsung will be releasing its Gear VR sometime in 2016. This device is unique in that it’s powered by the Samsung Galaxy smartphone.
  • Razer will be releasing a VR headset based upon open source software that it hopes will allow for more content delivery. Dates for market delivery are still not known.

All of these first generation virtual reality devices are aimed at gaming and, at least for the first few generations, that will be their primary use. As with any new technology, price is going to be an issue for the first generation devices, but one has to imagine that within a few years these devices might be as common as, or even displace, traditional game consoles. The idea of being totally immersed in a game is going to be very attractive.

There are two big players in the augmented reality market—Microsoft’s HoloLens and the Google-backed Magic Leap. These devices don’t have a defined target release date yet, but the promise for augmented reality is huge. These devices are being touted as perhaps the successor to the smartphone and as such have a huge market potential. The list of potential applications for an augmented reality device is mind-bogglingly large, which must be what attracted Google to buy into Magic Leap.

Magic Leap works by beaming images directly into a user’s retinas, and the strength and intensity of the beam can create the illusion of 3D. But as with Google Glass, a user will also be able to see the real world behind the image. This opens up a huge array of possibilities that range from gaming, where the device takes over a large share of the visual space, to the same sorts of communicative and informative functions performed by Google Glass.

The big hurdles for augmented reality are how to power the device and how to overcome the social stigma of wearing a computer in public—who can forget the scorn that instantly accrued to glassholes, those who wore Google Glass into bars and other public places? The device must be small, low power, inconspicuous to use, and still deliver an amazing visual experience. It’s probably going to take a while to work out those issues.

The two kinds of devices will compete with each other to some extent on the fringes of the gaming community, and perhaps in areas like providing virtual tours of other places. But for the most part the functions they perform and the markets they chase will be very different.

The Latest on Malware

Cisco has identified a new kind of malware that takes steps to evade being cleansed from systems. The example they provide is the Rombertik malware, one of a new breed of malware that actively fights against being detected and removed from devices.

Rombertik acts much like a normal virus in its ability to infect machines. For example, once embedded in one machine on a network it will send phishing emails to infect other machines, and it exhibits other typical malware behavior. What is special about Rombertik and similar new malware is how hard they fight to stay in a system. The virus contains a false-data generator to overwhelm analysis tools, contains tools that can detect and evade a sandbox (a common way to trap and disarm malware), and has a self-destruct mechanism that can kill the infected machine by wiping out the master boot record.

The problem with this new family of malware is that it evades the normal methods of detection. Typical malware detection tools look for telltale signs that a given website, file, or app contains malware. But this new malware is specifically designed either to hide those telltale signs or to morph into something else when detected. By the time you try to eradicate it in its original location, it has moved somewhere else.
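
To see why morphing defeats the traditional approach, here is a toy sketch in Python (purely illustrative; real scanners are far more sophisticated): a classic signature-based scanner hashes a payload and looks it up against known-bad signatures, so it catches an exact copy, but a variant that changes even a single byte produces a brand-new hash and sails through.

```python
import hashlib

# Toy signature database: hashes of known-bad payloads
known_signatures = {"5d41402abc4b2a76b9719d911017c592"}  # MD5 of b"hello"

def flags_as_malware(payload: bytes) -> bool:
    """Classic signature check: hash the payload and look it up."""
    return hashlib.md5(payload).hexdigest() in known_signatures

print(flags_as_malware(b"hello"))  # True - exact match against a known signature
print(flags_as_malware(b"hellp"))  # False - one changed byte, brand-new signature
```

This is why detection has been shifting toward behavioral analysis and sandboxing, which are exactly the techniques this new breed of malware is built to detect and evade.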

This new discovery is typical of the ongoing cat and mouse game between hackers and malware security companies. The hackers always get a leg up when they come out with something new and they generally can go undetected until somebody finally figures out what they are up to.

This whole process is described well in two reports issued by web security companies. In its State of the Web 2015: Vulnerability Report, Menlo Security reports that 317 million pieces of malware were produced in 2014, and it questions whether the security industry is really ready to handle new kinds of attacks.

The report says that enterprises spent more than $70 billion on cybersecurity tools in 2014 but still lost nearly $400 billion as a result of cybercrime. It reports that the two biggest sources of malware in large businesses are web browsing and email – two things that are nearly impossible to eliminate from corporate life.

Menlo scanned the Alexa top one million web sites (those getting the most traffic) and found the following:

  • 34% of web sites were classified as risky due to running software that is known to be vulnerable to hacking.
  • 6% of websites were found to be serving malware or spam, or were part of a botnet.

The other recent report on web vulnerabilities came from Symantec. Symantec said that hackers no longer need to break down the doors of corporate networks when the keys to hack them are readily available. That mirrors the comments by Menlo Security and refers to the fact that companies operate software with known vulnerabilities and then take a long time to react when security breaches are announced.

The report says that in 2014 firms took an average of 50 days to implement security patches, and hackers leap on newly disclosed vulnerabilities before patches are in place. The biggest example of this in 2014 was the Heartbleed bug, which hackers were widely exploiting within 4 hours of it hitting the web, while companies took a very long time to come up with a defense. Symantec says there were 24 separate zero-day vulnerabilities in 2014 – flaws that were exploited before there was any patch or immediate defense.

Symantec reports much the same thing as Menlo Security: the big danger from malware is what it can do once it is inside a network. The first piece of malware can hit a network in many different ways, but once there it uses a number of sophisticated tools to spread throughout the network.

There is certainly nothing foolproof you can do to keep malware out of your corporate systems. But most of the ways that networks get infected are not through hackers, but through employees. Employees still routinely open spam emails and attachments and respond to phishing emails – so making sure your employees know more about malware and its huge negative impact might be your best defense.

Broadband CPNI

The FCC said before they passed the net neutrality rules that they were going to very lightly regulate broadband providers using Title II. And now, just a few weeks after the new net neutrality rules are in place, we already see the FCC wading into broadband CPNI (customer proprietary network information).

CPNI rules have been around for a few decades in the telephony world. These rules serve a dual purpose: they protect customer confidentiality (meaning that phone companies aren’t supposed to do things like sell lists of their customers), and they protect customer calling information by requiring a customer’s explicit permission before that data can be used. Of course, we have to wonder if these rules ever had any teeth at all since the large telcos shared everything they had with the NSA. But I guess that is a different topic, and it’s obvious that the Patriot Act trumps FCC rules.

The CPNI rules for telephone service are empowered by Section 222 of Title II. It turns out that this is one of the sections of Title II that the FCC chose not to forbear from applying to broadband, and so now the FCC has opened an investigation into whether it should apply the same, or similar, rules to broadband customers.

It probably is necessary for the FCC to do this, because once Title II went into effect for broadband, authority in this area shifted to the FCC. Until now, customer protection for broadband has been under the jurisdiction of the Federal Trade Commission.

There clearly is some cost to complying with CPNI rules, and those costs are not insignificant, especially for smaller carriers. Today any company that sells voice service must maintain, and file with the FCC, a manual showing how it complies with CPNI rules. Further, it has to periodically show that its staff has been trained to protect customer data. If the FCC applies the same rules to ISPs, then every ISP that sells data services is going to incur similar costs.

But one has to wonder if the FCC is going to go further with protecting customer data. In the telephone world, usually the only information the carriers save is a record of the long distance calls made from and to a given telephone number. Most phone companies don’t track local calls made or received. I also don’t know of any telcos that record the contents of calls, except in those circumstances when a law enforcement subpoena requires them to do so.

But ISPs know everything a customer does in the data world. They know every web site you have visited, every email you have written, everything that you do online. They certainly know more about you than any other party on the web. And so the ISPs have possession of data about customers that most people would not want shared with anybody else. One might think that in the area of protecting customer confidentiality the FCC might make it illegal for an ISP to share this data with anybody else, or perhaps only allow sharing if a customer gives explicit permission.

I have no idea if the larger telcos use or sell this data today. There is nothing currently stopping them from doing so, but I can’t ever recall hearing of companies like Comcast or AT&T selling raw customer data or even metadata. But it’s unnerving to think that they can, and so I personally hope that the FCC CPNI rules explicitly prohibit ISPs from using our data. I further hope that any requirement for a customer’s permission is not one of those things that can be buried on page 12 of the terms of service you are required to approve in order to use your data service.

What would be even more interesting is if the FCC takes this one step further and doesn’t allow any web company to use your data without getting explicit permission to do so. I have no idea if they even have that authority, but it sure would be a huge shock to the industry if they tried to impose it.

The Law of Accelerating Returns

Ray Kurzweil, a director of engineering at Google, was hired because of his history of predicting the future of technology. According to Kurzweil, his predictions are common sense once one understands what he calls the Law of Accelerating Returns. That law simply says that information technology follows a predictable and exponential trajectory.

This is demonstrated elegantly by Moore’s Law, in which Intel cofounder Gordon Moore predicted in the mid-60s that the number of transistors incorporated in a chip would double roughly every 24 months. His prediction has held true ever since.

But this idea doesn’t stop with Moore’s Law. The Law of Accelerating Returns says that this same phenomenon holds true for anything related to information technology and computers. In the ISP world we see evidence of exponential growth everywhere. For example, most ISPs have seen the amount of data downloaded by the average household double every four years, stretching back to the dial-up days.

What I find somewhat amazing is that a lot of people in the telecom industry, and certainly some of our regulators, think linearly while the industry they are working in is progressing exponentially. You can see evidence of this everywhere.

As an example, I see engineers designing new networks to handle today’s network demands ‘plus a little more for growth’. In doing so they almost automatically undersize the network capacity because they don’t grasp the multiplicative effect of exponential growth. If data demand is doubling every four years, and if you buy electronics that you expect to last for ten to twelve years, then you need to design for roughly eight times the data that the network is carrying today. Yet that much future demand just somehow feels intuitively wrong and so the typical engineer will design for something smaller than that.
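
To make the arithmetic concrete, here is a minimal sketch (in Python, with illustrative numbers): if demand doubles every four years, the capacity you need to design for is 2 raised to the number of doublings that fit within the equipment's life.

```python
def design_multiplier(equipment_life_years: float, doubling_period_years: float = 4.0) -> float:
    """How many times today's demand a network must be sized for,
    if demand doubles every doubling_period_years."""
    return 2 ** (equipment_life_years / doubling_period_years)

# Electronics expected to last 10-12 years, with demand doubling every 4 years:
for life in (10, 12, 20):
    print(f"{life}-year life -> design for {design_multiplier(life):.1f}x today's demand")
# 10-year life -> 5.7x, 12-year life -> 8.0x, 20-year life -> 32.0x
```

The same math drives the policy example below: at a four-year doubling rate, household demand is roughly 3–4 times today's level after seven years and 32 times after twenty.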

We certainly see this with policy makers. The FCC recently set the new definition of broadband at 25 Mbps. When I look at how households use broadband today, this feels about right. But at the same time, the FCC has agreed to pour billions of dollars through the Connect America Fund to assist the largest telcos in upgrading their rural DSL to 15 Mbps. Not only is that speed slower than today’s definition of broadband, but the telcos have up to seven years to deploy the upgraded technology, during which time the broadband needs of the customers this is intended for will have grown to roughly four times today’s needs. And likely, once the subsidy stops the telcos will say that they are finished upgrading, and this will probably be the last broadband upgrade in those areas for another twenty years, at which point the average household’s broadband needs will be 32 times higher than today’s.

People see evidence of exponential growth all of the time without it registering as such. Take the example of our cellphones. The broadband and computing power demands expected of our cellphones are growing so quickly that a two-year-old cellphone starts to feel totally inadequate. A lot of people view this as their phone wearing out. But the phones are not deteriorating in two years; instead, we keep downloading new and bigger apps and asking our phones to work harder.

I laud Google and a few others for pushing the idea of gigabit networks. This concept says that we should leap ahead of the exponential curve and build a network today that is already future-proofed. I see networks all over the country that have the capacity to provide much faster speeds than are being sold to customers. I still see cable company networks with tons of customers sitting at 3 Mbps to 6 Mbps as the basic download speed, and fiber networks with customers being sold 10 Mbps to 20 Mbps products. And I have to ask: why?

If the customer demand for broadband is growing exponentially, then the smart carrier will increase speeds to keep up with customer demand. I talk to a lot of carriers who think that it’s fundamentally a mistake to ‘give’ people more broadband speed without charging them more. That is linear thinking in an exponential world. The larger carriers seem to finally be getting this. It wasn’t too many years ago when the CEO of Comcast said that they were only giving people as much broadband speed as they needed, as an excuse for why the company had slow basic data speeds on their networks. But today I see Comcast, Verizon, and a number of other large ISPs increasing speeds across the board as a way to keep customers happy with their product.

How’s Your Strategic Plan?

I help companies develop strategic plans, and one thing that I often find is that people think strategic planning is the process of developing goals for their company. The first thing I have to point out to them is that having goals is great and you need them, but goals are not a strategic plan.

Goals are an essential first step for looking into the future because they define your ultimate vision of where you want your company to go. Goals can be almost anything: increased profits, better sales, improved customer service, eliminating a network shortcoming, and so on. But if you are going to reasonably achieve your goals you need to turn them into both a strategic plan and a tactical plan.

A strategic plan is basically a way to rate and rank your goals and turn them into an action plan. Not everybody goes about this in the same way, but a normal first step is to assess the resources you have available to achieve each of your goals. Almost every company has two primary resources that are limiting factors – cash and manpower. So it’s vital that you somehow determine how much of your scarce resources are needed to achieve each goal on your list.

This is harder than it sounds. Let’s say you have listed five goals. For each of them you want to do the following:

  • The first step is to rank your goals by importance. For example, you may have a few goals that are of top importance (like fixing a problem that is causing network outages or improving margins) while other goals are less important – at least for now. This is perhaps the hardest part of the process because it forces you to choose among your many goals and decide which ones matter most to the business.
  • Once you have a prioritized list of goals, the next step is to come up with a list of specific tasks necessary to achieve each goal. Be realistic and explicit in this determination. For instance, if you want to increase sales to businesses, figure out what you think it will take to make that happen. Will it require more cash in the form of hiring additional sales staff or paying higher commissions? Will it take more human resources – are there key people in your organization who need to spend time to make the goal happen?
  • There is often more than one reasonable path to achieve a goal, so you must also explore the most likely alternative paths to help determine which one is right for you. This exploration is critical at this stage, because if you only consider one solution you will have locked yourself into a rigidly-defined path without flexibility. So spend some time brainstorming about the best ways to achieve each goal and don’t be afraid to consider multiple solutions.
  • Once you have assessed the reasonable ways to achieve each goal, you are then ready to start getting strategic. Very few organizations have enough resources to pursue all of their goals at the same time, and so you need to determine which of the possible solutions to various goals you are going to pursue. This is where you have to get realistic about what can be accomplished within the time frame of your strategic plan. For example, if you have a fixed cash budget for the following year, you obviously can’t pursue plans that cost more than you can afford. And the same with people. If achieving your goals is going to draw too much time from key people, you need to get realistic about how much can be accomplished by the resources you have. This step requires making a realistic ‘budget’ for achieving your goals in terms of your cash and key manpower limitations. I have seen strategic plans that assumed that a few key staffers would spend all of their time on the new projects, and in doing so would ignore their current workload – and such a plan is going to fail due to lack of people resources.
  • The way I like to do this process is much the same way that many people do a family budget. You start with the amount of resources available to ‘spend’, be that cash or key staff time, and then work backwards through the goals, considering the most important ones first, to see which you can afford to pursue. (A minimal sketch of this budget-style pass appears after this list.) This can become hard because you will often end up having to scrap some goals that are not ‘affordable’, and so this process often means making tough choices.
  • You want to make sure that the final strategy focuses on goals you can achieve in a reasonable amount of time. I’ve found that you are almost always better off putting all of your effort into completing a few goals rather than making partial progress towards many. You want your organization to have wins and to see progress, and the best way to do this is to get the top goals on your list behind you before you tackle your next strategic plan.
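
Here is that budget-style pass as a minimal sketch (in Python, with entirely made-up goals and numbers): walk the ranked goals in priority order and keep each one only while its cash and staff-time costs still fit within the remaining budget.

```python
# Hypothetical goals: (name, priority rank, cash needed in $k, key-staff hours)
goals = [
    ("Fix network outages",    1, 150, 400),
    ("Improve margins",        2,  50, 300),
    ("Grow business sales",    3, 200, 600),
    ("Launch new product",     4, 300, 500),
    ("Refresh billing system", 5, 100, 800),
]

cash_left, staff_left = 400, 1400   # resources available to 'spend' this cycle
plan = []

for name, rank, cash, hours in sorted(goals, key=lambda g: g[1]):
    if cash <= cash_left and hours <= staff_left:
        plan.append(name)
        cash_left -= cash
        staff_left -= hours
    # else: the goal is not 'affordable' this cycle - scrap or defer it

print(plan)  # ['Fix network outages', 'Improve margins', 'Grow business sales']
```

The point of the sketch is the discipline, not the code: the budget, not the wish list, decides how many goals make it into the plan.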

The final strategic plan will end up as a list of the goals that you think you can achieve during the strategic planning time frame (which should only be a few years at most). From there you are ready to develop a tactical plan. This means establishing a very specific set of assignments, timelines, and budgets to make sure that the goals you’ve chosen get implemented. It’s no good creating a strategic plan if you don’t take this extra step to make sure the plan gets implemented. There are very specific ways to make sure that a tactical plan stays on schedule and on budget – but that is the topic of another blog.

If the above process sounds too challenging to tackle, then don’t hesitate to bring in outside help to facilitate the process. Often, after going through the strategic planning process a few times, businesses eventually don’t need outside help. But learning how to be strategic is like learning anything else; you will find techniques that work for your company, and once you learn the discipline of thinking strategically, you will start to see your goals come to fruition – an outcome that every company wants.

The Cost of International Calling

We have gotten so used to the cost of long distance calls dropping in the US that many people don’t realize that it is still very expensive to call some other places in the world.

In the US we are now used to unlimited long distance plans, and so most of us don’t think about the cost of long distance. We all still pay for it—for example, that’s one of the costs built into your cellphone bill. I imagine that there are younger people who have no appreciation that we were once very careful about making long distance calls.

I remember in the early 80s when AT&T announced a ‘reduced’ long distance plan that had a flat rate of 12 cents per minute. Before that plan, costs varied by distance called, and it was not unusual for calls to some places in the US to cost as much as 50 cents per minute. Long distance rates also varied by time of day, and people would wait until midnight to call relatives to get the nighttime rates.

But over the years the FCC has deliberately taken steps to reduce long distance rates since they figured that might be the one thing they could do that would most boost the US economy. And it worked.

At the same time that the US made a deliberate effort to reduce costs many other countries did the same. Thirty years ago it was almost universally expensive to call other countries. Part of this was due to lack of facilities; there were only a few trans-oceanic cables that were capable of carrying voice – and they were generally full all of the time with calls. But today it’s almost as cheap to call places like Canada and a lot of Europe as it is to call in the US. And there are now many calling plans that include a number of foreign countries.

But this is not true everywhere. There are still a lot of places around the world that are very expensive to call. The rates I quote are from Comcast’s latest international long distance rates, but the rates charged by other carriers are similar. Even today it costs $2.90 per minute to call Afghanistan. A few years ago that was over $5 per minute. Surprisingly, it’s less than half that rate, at $1.20 per minute, to call Antarctica.

It generally costs a lot more to call islands. Most of the Caribbean is between $0.40 and $1.20 per minute (although the US Virgin Islands are at US rates). The Pacific islands in Micronesia are generally around $1 per minute.

In general there are two reasons why rates are so high in some places. For some islands, the price of calling reflects the high cost of the facilities needed to complete the calls. Such calls these days are often completed over satellite, since there are still places not connected to the world by undersea fibers. But the other big cost component is government tariff rates, charged as a moneymaker for the local governments. This is why you see calls to North Korea costing $3.28 per minute, calls to Laos costing $2.43, and calls to Myanmar costing $2.17.

In many cases these expensive rates are bypassed using voice over IP across the Internet, and people who live in places with expensive rates often use the Internet to talk to family overseas. In many countries that is a risk, and you can be prosecuted for bypassing the tariff rates. I remember when VoIP was new there were entrepreneurs in Jamaica who set up calling over the Internet and then dumped the calls into the local network. It seemed that the Jamaican government would arrest a few VoIP vendors every week, but new ones always sprang up to take their places. Now only the most repressive countries still try to police this, while most have bowed to the reality of VoIP.

I remember working with many clients in the 70s and 80s and one thing I always looked at was their long distance revenues. Even the smallest telcos would have a few residential customers that made over $1,000 per month in long distance calls and many others who spent hundreds of dollars per month. I remember when parents would groan if one of their kids got a boyfriend or girlfriend who was long distance. We’ve come a long way from those days, and unless you have a reason to call a handful of expensive countries or islands a lot, long distance is now one of those things that you don’t give a second thought about.

What’s the Future for Media Advertising?

I’m glad I’m not in the advertising business. We think telecom is undergoing big changes, but the advertising firms that represent large clients must be struggling to know where to find the eyeballs to view their ads. The public’s traditional viewing habits are changing quickly and dramatically across all forms of media.

Not many years ago ad revenues were spread across TV, radio, and print and the big companies had a pretty good idea who was seeing their ads by demographic. But the way that people view all forms of media is changing so rapidly that it’s a lot harder to know who is seeing your ads.

Consider the following statistics comparing how people spend their time viewing different media versus how advertising dollars were spent. Both sets of numbers are from 2014 and come from Business Insider.

                      % of Time Spent     % of Advertising Dollars
Digital                    46.3%                  28.2%
Television                 36.6%                  38.1%
Radio                      11.8%                   8.6%
Print                       3.5%                  17.6%

Digital includes the web, cellphones, and all forms of digital advertising.

These percentages paint an interesting picture of how people are spending their time, and I think this is the first time I have ever seen this expressed in a side-by-side comparison across all forms of media. It’s obvious that people prefer digital media and spend nearly half of their media time there.

The problem advertisers have is that there is still a huge amount of change happening within each category. For example, it looks on the surface as if the amount of advertising spent on television is about right for the eyeball time it buys. But consider the following facts:

  • The demographics for television are changing dramatically and rapidly. For example, the percentage of households headed by 18–24 year olds that buy a cable subscription dropped 7 percentage points (or 12% overall) just last year.
  • Time-delayed viewing is up dramatically – over 40% of TV watching is now done on a delayed basis (using a DVR or video on demand) – and these viewers largely skip the commercials.

This means that the demographic for those who watch television is aging rapidly, and even many of those who watch are doing so on a time-delayed basis and skipping the ads. This has to be a huge concern for advertisers.

But there are equal issues with web advertising. One of the fastest growing categories of web apps is ad blocking, meaning that a huge number of people now block ads from showing up on the pages loaded by their browsers and devices. Studies have also shown that people are far better at ignoring web advertising than advertising on television or radio. They can and do read news articles or other content without looking at or clicking on any of the ads.

And so an advertiser has a very tough choice to make. They can place ads on television with its rapidly-aging demographic and quickly-decreasing percentage of people who see the ads, or they can advertise on the web where people either block the ads or become good at ignoring them.

This is all evidence that technology has given the average person the ability to skip ads if they so choose. I know I have largely wiped ads out of my life. I can’t recall having watched an ad on television this year and I very rarely click on web ads. I used to be a voracious reader of magazines and I have not looked at a magazine this year. I read a local paper every day, but I cannot name even one company that advertises in that paper. The one place where ads still get to me is the radio, which I always have on when I’m driving.

The problem with my behavior (and everybody else that ignores ads) is that advertising is what pays for a lot of the content we enjoy. If advertisers eventually bow to reality and cut back on TV and web advertising then a lot of the content we like will not be produced. It’s a real dilemma not only for the advertisers, but also for the television networks and web sites that rely on advertising to fund their content.

Are You Ready for Do-It-For-Me?

The majority of my clients are small businesses, and as such they spend an inordinate amount on software. They have software that they use for billing, accounting, payroll, benefits, taxes, sales, inventory/continuing property records, and scheduling. It’s expensive to buy the various software packages they need, and the software is complicated to learn and operate. The software is also generally inflexible and hard to customize to do what the company would like it to do. As small businesses they have to fit themselves to the software rather than the software fitting them.

There is a new trend in software that might make things easier on small businesses. We are now seeing Do-It-For-Me (DIFM) services that combine cloud software platforms with specialized external labor to perform functions that many companies find costly and time-consuming.

This idea of DIFM is gaining huge traction in the consumer world. We see millennials not buying cars and instead using Uber to get from place to place (cheaper than car ownership). There are now a ton of DIFM services on the web and you can hire somebody to temporarily help you with anything from weeding your garden to mailing packages for you. Now this concept is starting to spread to the business world.

The last revolution in software was the concept of buying only as much software as you need, or software-as-a-service (SaaS). There are now tons of SaaS packages for businesses that you can pay for by the user and that don’t force you to make a huge upfront investment. But most of these packages are still hard to learn and don’t integrate with the other software a business uses. So each SaaS program you buy is its own little silo, separate from the rest of your business, with the added drawback of a normally steep learning curve. SaaS software can save a firm a lot of money compared to buying a huge expensive package, but it doesn’t necessarily make life easier for employees or the business.

But Do-It-For-Me software aims to do just that – take the burden off your staff and let outside specialists take care of mundane tasks so your staff can focus on the important stuff. This idea has been around on a limited basis for years. For instance, there are huge, successful companies that handle payroll and all of the tax forms and employee deductions that companies hate keeping track of. In the telco world a lot of companies have for years sent their billing out to a service bureau that provides turnkey billing of customers.

There are now DIFM services for all sorts of software that offer to perform functions most businesses hate doing. All these software platforms ask of a business is to supply the raw data they need, and then they do everything else. These new companies are staffed to be super customer-friendly, making them easy to use.

There are a number of new start-ups in the DIFM arena and I expect many more as these companies find success. Some of the more interesting ones include:

  • Buzz360 has automated the marketing process for smaller companies. They can manage your web site, your social media interfaces, and other interfaces with customers, and they offer a variety of tools for communicating with customers and potential customers.
  • Bench offers a DIFM accounting service that eliminates the need for an in-house bookkeeper.
  • UpCounsel offers a way to use small-business attorneys on an as-needed basis.
  • Zenefits is interesting in that they give free Human Resources software to manage employee benefits and make their money from commissions on insurance.

Every firm has some functions that it hates to do. Such tasks either take valuable time away from more important functions or, because they are hated, they don’t get proper attention. You should definitely look around for alternatives, because there is probably somebody out there willing to take these kinds of tasks off your plate.

The Shift To Proprietary Hardware

There is a trend in the industry that is not good for smaller carriers. More and more I see the big companies designing proprietary hardware just for themselves. While that is undoubtedly good for the big companies, and I am sure that it saves them a lot of money, it is not good for anybody else.

I first started noticing this a few years ago with settop boxes. It used to be that Comcast and the other large cable companies used the same settop boxes as everybody else, and their buying power was so huge that it drove down the cost of settop boxes for everybody in the industry. It was standard for large companies to put their own name tag on the front of the boxes, but for the most part they were the same boxes that everybody else could buy, from the same handful of manufacturers.

But then I started seeing news releases and stories indicating that the largest cable companies had developed proprietary settop boxes of their own. One driver for this change is that the carriers are choosing different ways to bring broadband to the settop box. Another change is that the big companies are adding different features, and are modifying the hardware to go along with custom software. Cable companies are even experimenting with very non-traditional settop box platforms like Roku or the various game consoles.

I see this same thing going on all over the industry. The cable modems and customer gateways that the large cable companies and the large telcos use are proprietary and designed just for them. I recently learned that the WiFi units that Comcast and other large cable companies are deploying outdoors are proprietary to them. Google has designed its own fiber-to-the-premise equipment. And many companies including Amazon, Facebook, Google, and Microsoft are designing their own proprietary routers to use in their cloud data centers.

In all of these cases (and many others that I haven’t listed here), the big companies used to buy off-the-shelf equipment. They might have had a slightly different version of some of the hardware, but not different enough to make a difference to the manufacturers. Telco has always been an industry where only a handful of companies make any given kind of electronics. Generally, smaller companies bought from whichever vendors the big companies chose, since those vendors had the economy of scale.

But now the big carriers are not only using proprietary hardware, but a lot of them are getting it manufactured for themselves directly, without one of the big vendors in the middle. You can’t blame a large company for this; I am sure they save a lot of money by cutting Alcatel-Lucent, Cisco, and Motorola out of the supply chain. But this tendency is putting a hurt on the traditional vendors and making it harder for them to survive.

It’s going to get worse. Currently there is a huge push in many parts of the telecom business to use software-defined networking (SDN) to simplify field hardware and control everything from the cloud. Since the large carriers will shift to SDN networks long before smaller carriers, the big companies will be using very different gear at the edges of the network – and those are the parts of the network that cost the most.

This is a problem for smaller carriers, since they no longer benefit from buying the same devices that the large companies buy and sharing in that huge economy of scale. Over time this means the prices of the basic components smaller carriers buy are going to go up. And in the worst case there might not be any vendor that can make a business case for manufacturing a given component for the small carriers. One of the advantages of having healthy large manufacturers in the industry was that they could take a loss on some product lines as long as the whole suite of products they sold made a good profit. That will probably no longer be the case.

I hate to think about where this trend is going to take the industry in five to ten years, and I add it to the list of things that small carriers need to worry about.

What’s the Real Cost of Providing the Internet?

There is an interesting conversation happening in England about the true cost of operating the Internet. Because England is an island nation, all of the costs of operating the network must be borne by the whole country, and so every part of the Internet cost chain is being recognized and counted as a cost. That’s very different than the way we do it here.

There are two issues concerning British officials – power costs and network capacity. Reports are that operating the data centers and the electronics hubs needed to run the Internet now consumes 8% of all of the power produced in the country. And it’s growing rapidly. At the current rate of growth of Internet consumption, it’s estimated that the Internet’s power requirements are doubling every four years.

Here in the US we don’t have the same concern about power costs. First, we have hundreds of different power companies scattered across the country, and we don’t produce electricity in the same places that we use the Internet. But second, in this country the large data centers are operated by billion-dollar companies like Amazon, Google, and Facebook who can afford to pay the electric bills, mostly out of advertising revenues. In a country like England, that sort of drain on electricity capacity must be borne by all electric ratepayers when the whole grid hits capacity and must somehow be upgraded.

And it’s going to get a lot worse. If the pace of power consumption needed for broadband doesn’t somehow slow down, then by 2035 the Internet will be using all of the power produced in the British Isles today. It’s not likely that the power needs will grow quite that fast. For example, there are far more power-efficient routers and switches being made for data centers that are going to knock the power demand curve down a notch, but there is no reason to think that the demand for Internet usage is going to stop growing anytime soon.
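
The arithmetic behind that projection is simple compounding; here is a quick sketch (in Python, using just the 8% share and four-year-doubling figures quoted above):

```python
import math

current_share = 0.08    # Internet's share of today's UK power production
doubling_years = 4.0    # demand doubles every four years

# Years until Internet demand equals 100% of today's total production:
years_to_100pct = doubling_years * math.log2(1.0 / current_share)
print(f"{years_to_100pct:.0f} years")   # ~15 years, i.e. around 2030

# Share of today's production after 20 years (2035) if nothing bends the curve:
print(f"{current_share * 2 ** (20 / doubling_years):.0%}")   # 256%
```

In other words, unchecked growth would cross today's total output well before 2035, which is exactly why more efficient routers and switches have to bend the curve.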

In Britain they are also worried about the cost of maintaining the network. They say that the bulk of their electronics needs to be upgraded in the next few years. In the industry we always talk about fiber being a really long-term investment, and the fiber made today is so good that we really don’t know how long it’s going to last – 50 years, 75 years, longer? But that is not true for the electronics, which have to be replaced every 7 to 10 years, and that can be expensive.

In this country, all of the companies and cities that were early adopters of FTTP technology used BPON – the first fiber-to-the-premise technology. This technology was the best thing available at the time and was far faster than cable modems – but that is no longer the case. BPON is limited in two major ways. First, as happens with many technologies, the manufacturers have all stopped supporting BPON. That means it’s hard to buy replacement parts, and a BPON network is at major risk of failure if one of the larger core components of the network dies.

BPON is also different enough from newer technologies that the new replacements, like GPON, are not backwards compatible. This means that in order to upgrade to a newer version of fiber technology every electronic component in the network from the core to the ONTs on customer premises must be replaced, making upgrades very costly. Even the way BPON is strung to homes is different, meaning that there is fiber field work needed to upgrade it. We have hopefully gotten smarter lately; a lot of fiber electronics today are being designed to still work with later generations of equipment.

This is what happened in England. The country’s telecoms were early adopters of fiber, and so the electronics throughout the country are already aged and running out of capacity. I saw a British article whose author worried that the networks were getting ‘full’ and that more fiber would have to be built. The author didn’t recognize that upgrading the electronics would instead let the existing fiber deliver a lot more data.

England is one of the wealthier nations on the global scale, and one has to be concerned about how the poorer parts of the world are going to deal with these issues. As we introduce the Internet into Africa and other poorer regions, one has to ask how a country that already has trouble generating enough electricity is going to handle the demand caused by the Internet, and how poorer nations will keep up with the constant upgrades needed to keep the networks operating.

Perhaps I am worrying about nothing and maybe we will finally see the cheap fusion reactors that have been just over the horizon since I was a teenager. But when a country like England talks about the possible need to ration Internet usage, or to somehow meter it so that big users pay a lot more, one has to be concerned. In our country the big ISPs always complain about profits, but they are wildly profitable. The US and a few other nations are very spoiled and we can take the continued growth of the Internet for granted. Much of the rest of the world, however, is going to have a terrible time keeping up, and that is not good for mankind as a whole.