Shrinking Competition

I bet that the average person thinks that telecom competition is increasing in this country. There are so many news releases talking about new and faster broadband that people probably think broadband is getting better everywhere. The news releases might mention Google Fiber or talk about 4G or 5G data and imply that competition is increasing in most places across the country. But I travel a lot and I am pretty certain that in most markets broadband competition is shrinking.

There are a few places getting new fiber. Google has built in a few cities. CenturyLink has woken up from the long sleep of Qwest and is building fiber in some markets. And there are a handful of municipalities and other companies building fiber in some markets. This is bringing faster broadband to some cities, or more accurately to some neighborhoods in some cities, since almost nobody is building fiber to an entire metro market. But it's hard to say that this fiber is bringing price competition. Google has priced its gigabit fiber at $70 per month and everybody else is charging the same or more. And these big bandwidth products are only intended for the top third of the market – they are cherry picking products. Cities that are getting fiber are mostly not seeing price competition, particularly for the bottom two-thirds of the market.

But in most markets in the US the cable companies have won the broadband battle. I've seen surveys from a few markets showing that DSL penetration is as low as 10% – and even then at lower speeds and prices in most markets – and the cable companies serve everybody else.

It seems the two biggest telcos are headed down the path to eventually get out of the landline business. Verizon stopped building new FiOS and has now sold off some significant chunks of FiOS customers. It’s not hard to imagine that the day will come over the next decade when they will just quietly bow out of the landline business. It’s clear when reading their annual report that the landline business is nothing more than an afterthought for them. I’ve read rumors that AT&T is considering getting out of the U-Verse business. And they’ve made it clear that they want completely out of the copper business in most markets. And so you are also likely to see them start slipping out of the wireline business over time.

I can't tell you how many people I meet who are convinced that wireless cellular data is already a competitor of landline data. It is not a competitor for many reasons. One primary reason is physics; for a wireless network in a metropolitan area to deliver the kind of bandwidth that can be delivered on landlines would require fiber up and down every street to feed the many required cell sites. But it's also never going to be a competitor due to the draconian pricing structure of cellular data. It's not hard to find families who download more than 100 gigabytes during a month, and with Verizon or AT&T wireless that much usage would probably cost $1,000 per month. Those two GIANT companies are counting on landline-based WiFi everywhere to give their products a free ride, and they do not envision cellular data supplanting landlines.
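
How the math gets to $1,000 is easy to sketch. The numbers below are back-of-the-envelope assumptions for illustration (roughly $10 per gigabyte once a family blows past a small plan allowance), not published carrier rates:

```python
# Back-of-the-envelope sketch of a monthly cellular data bill under a
# hypothetical capped plan. Plan size, base price, and overage rate are
# illustrative assumptions, not actual Verizon or AT&T pricing.

def monthly_cellular_cost(usage_gb, plan_gb=10, plan_price=80.0, overage_per_gb=10.0):
    """Estimated monthly bill for a given amount of data usage."""
    overage_gb = max(0, usage_gb - plan_gb)
    return plan_price + overage_gb * overage_per_gb

if __name__ == "__main__":
    for usage in (10, 50, 100):
        print(f"{usage:>3} GB -> ${monthly_cellular_cost(usage):,.2f} per month")
    # Under these assumed rates, 100 GB comes to roughly $980 per month,
    # versus a flat-rate landline connection with no usage cap.
```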

Broadband customer service from the large companies has gone to hell. The large cable companies and telcos are among the worst at customer service when measured against all industries. This might be the best evidence of the lack of competition – because the big carriers don’t feel like they have to spend the money to be good. Most customers have very few options but to buy from one of the behemoths.

We were supposed to be heading toward a world where the big telcos built fiber and got into the cable business to provide a true competitor to the cable companies. A decade ago the consensus was that the competition between AT&T and Time Warner and between Verizon and Comcast was going to keep prices low, improve customer service, and offer real choices for people. But that has never materialized.

Instead what we have are the cable companies dominating landline broadband and the two largest telcos controlling the wireless business. Other competition at this point is not much more than a nuisance to both sets of companies. We see prices on broadband rising while broadband speeds in non-competitive markets are stagnating. And, most unbelievable to me, we've seen the US population replace a $25/month landline that sufficed for the whole family with cellphones that cost $50 or more for each family member. I can't recall anybody predicting that years ago. It kind of makes a shambles of fifty years' worth of strict telephone regulation that used to fight against telcos raising rates a dollar or two.

So I contend that overall competition in the country is shrinking, and if Verizon and AT&T get out of the landline business it will almost disappear in most markets. Even where we are seeing gigabit networks, the competition is over speed and not price. People are paying more for telecom products than they did years ago, and price increases are outstripping inflation. Make no mistake – if I could get a gigabit connection I would buy it – but giving the upper end of the market the ability to spend more without giving the whole market the option to spend less is not competition – it's cherry picking.

Li-Fi

There is another new technology that you might be hearing about soon. It's called Li-Fi and also goes by the names Visible Light Communications (VLC) and Optical WLAN. This technology uses light as a medium of data transmission, mostly within a room, and will compete with WiFi and other short-distance transmission technologies.

Early research into the technology used fluorescent lamps and achieved data speeds of only a few Kbps. The trials got more speed after the introduction of LED lighting, but the technology didn't really take off until Professor Harald Haas of the University of Edinburgh created a device in 2011 that could transmit at 10 Mbps. Haas calculated the theoretical maximum speed of the technology at the time at 500 Mbps, but recent research suggests that the maximum speeds might someday be as fast as 1.5 Gbps.

There are some obvious advantages to the technology:

  • Visible light is totally safe to people and eliminates any radiation issues involved in competitors like 802.11ad.
  • It’s incredibly difficult to intercept and eavesdrop on Li-Fi transmissions that stay within a room between the transmitter and receiver.
  • It’s low power, meaning it won’t drain batteries, and uses relatively simple electronics.

But there are drawbacks as well:

  • The speed of the transmission is going to be limited by the data pipe that feeds it. Since it's unlikely that there will ever be fiber built to light bulbs, Li-Fi is likely to be fed by broadband over powerline, which currently has a maximum theoretical speed of something less than 1 Gbps and a practical speed a lot less.
  • At any reasonable speed Li-Fi needs a direct line-of-sight. Even within a room, if anything comes between the transmitter and the receiver the transmission stops. Literally waving a hand into the light beam will stop the transmission. This makes it awkward to use for almost any mobile device or something like a virtual reality headset.

There are a few specific uses being considered for the Li-Fi technology:

  • The technology possibly has more uses in an industrial setting, where data could be shared between computers, machines, and robots in such a way as to ensure that the light path doesn't get broken.
  • The primary contemplated use of the technology is to send large amounts of data between computers and data devices. For example, Li-Fi could be used to transmit a downloaded movie from your computer to a set-top box. This could be a convenient, interference-free way to move data between computers, phones, game consoles, and smart TVs.
  • It can be used at places like public libraries to download books, videos, or other large files to users without having them log onto low-security WiFi networks. It would also be great as a way to hand out eCatalogs and other data files in convention centers and other places where wireless technologies often get bogged down due to user density.
  • Another use is being called CamCom. It would be possible to build Li-Fi into every advertising display at a store and let the nearest light bulb transmit information about the product to shoppers along with specials and coupons. This could be done through an app much more quickly than using QR codes.

The biggest hindrance to the technology today is the state of LEDs. But Haas has been leading researchers from the Universities of Cambridge, Oxford, St. Andrews, and Strathclyde in work to improve LEDs specifically for the purposes of Li-Fi. They have created a better LED that provides almost 4 Gbps operating on just 5 milliwatts of optical output power. These kinds of speeds can only go a very short distance (inches), but they hope that through the use of lenses they will be able to transmit 1.1 Gbps for up to 10 meters.

They are also investigating the use of avalanche photodiodes to create better receivers. An avalanche photodiode works by creating a cascade of electrons whenever it’s hit with a photon. This makes it much easier to detect transmitted data and to cut down on packet loss.

It’s likely at some point within the next few years that we’ll see some market use of the Li-Fi technology. The biggest market hurdle for this and every other short-range transmission technology to overcome is to convince device makers like cellphone companies to build the technology into their devices. This is one of those chicken and egg situations that we often see with new technologies in that it can’t be sold to those who would deploy it, like a store or a library, until the devices that can use it are on the market. Unfortunately for the makers of Li-Fi equipment, the real estate on cellphone chips and other similar devices is already very tightly packed and it is going to take a heck of a sales job to convince cellphone makers that the technology is needed.

Are We Expecting Too Much from WiFi?

I don't think that a week goes by when I don't see somebody proposing a new use for WiFi. This leads me to ask if we are starting to ask too much from WiFi, at least in urban areas.

Like all spectrum, WiFi is subject to interference. Most licensed spectrum has strict rules against interference and there are generally very specific rules about how to handle contention if somebody is interfering with a licensed spectrum-holder. But WiFi is the wild west of spectrum and it's assumed there is going to be interference between users. There is no recourse for such interference – it's fully expected that every user has an equal right to the spectrum and everybody has to live with the consequences.

I look at all of the different uses for WiFi and it’s not too hard to foresee problems developing in real world deployments. Consider some of the following:

  • Just about every home broadband connection now uses WiFi as the way to distribute data around the house between devices.
  • Comcast has designed their home routers to have a second public transmitter in addition to the home network, so these routers initiate two WiFi networks at the same time.
  • There is a lot of commercial outdoor WiFi being built that can bleed over into home networks. For example, Comcast has installed several million hotspots that act to provide convenient connections outside for their landline data customers.
  • Many cities are contemplating building citywide WiFi networks that will provide WiFi for their citizens. There are already numerous smaller deployments by cities, and over the next few years I think we will start seeing the first truly citywide WiFi networks.
  • Cable companies and other carriers are starting to replace the wires to feed TVs with WiFi. And TVs require a continuous data stream when they are being used.
  • Virtual reality headsets are likely to be fed by WiFi. There are already game consoles using WiFi to connect to the network.
  • There is a new technology that will use WiFi to generate the power for small devices like cellphones. For this technology to be effective the WiFi transmitter has to beam continuously.
  • And while they are not big bandwidth users at this point, a lot of IoT devices are going to count on WiFi to connect to the network.

On top of all of these uses, the NCTA sent a memo to the FCC on June 11 that warned of possible interference with WiFi from LTE-U and LAA, technologies that let cellular carriers use the same unlicensed spectrum. Outside interference is always possible, and in spectrum where interference is expected it might be hard for the average user to detect or notice. There is generally nobody monitoring the WiFi bands for interference in the same way that wireless carriers monitor their licensed spectrum.

All of these various uses of the spectrum raise several different concerns:

  • One concern is just plain interference – if you cram too many different WiFi networks into one area, each trying to grab the spectrum, you run into traditional radio interference which cuts down on the effectiveness of the spectrum.
  • WiFi has an interesting way of using spectrum. It is a good spectrum for sharing applications, but that is also its weakness. When there are multiple networks trying to grab the WiFi spectrum, and multiple user streams within those networks, each gets a 'fair' portion of the spectrum, decided somehow among the various devices and networks. This is a good thing in that it means a lot of simultaneous streams can happen at the same time on WiFi, but it also means that under a busy load the spectrum gets chopped into tiny little streams that can be too small to use (see the sketch after this list). Anybody who has tried to use WiFi in a busy hotel knows what that's like.
  • All WiFi is channelized, or broken down into channels instead of being one large block of spectrum. The new 802.11ac that is being deployed has only two 160 MHz channels, and once those are full with a big bandwidth draw, say a virtual reality headset, then there won't be room for a second large bandwidth application. So forget using more than one VR headset at the same time, or in general trying to run more than one large bandwidth-demanding application.
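
To make the fair-sharing point concrete, here is a minimal sketch of how a single WiFi channel gets divided as more streams contend for it. The channel capacity and per-stream demand figures are illustrative assumptions, and real WiFi contention (CSMA/CA, retransmissions, rate adaptation) behaves worse than this idealized equal split:

```python
# Toy model of WiFi airtime sharing on one channel: every active stream
# gets an equal slice of the usable capacity. The numbers are assumptions
# chosen only to illustrate how quickly the per-stream share shrinks.

def per_stream_throughput(channel_mbps, num_streams):
    """Idealized equal share of a channel among competing streams."""
    return channel_mbps / max(1, num_streams)

if __name__ == "__main__":
    channel_mbps = 400   # assumed usable capacity of one busy channel
    demand_mbps = 25     # assumed need of one HD/VR video stream
    for streams in (1, 4, 16, 64):
        share = per_stream_throughput(channel_mbps, streams)
        verdict = "enough" if share >= demand_mbps else "too little"
        print(f"{streams:>3} streams -> {share:6.1f} Mbps each ({verdict} for a {demand_mbps} Mbps stream)")
```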

It’s going to be interesting to see what happens if these problems manifest in homes and businesses. I am imagining a lot of finger-pointing between the various WiFi device companies – when the real problem will be plain old physics.

The Power of Why

I had a conversation with a friend the other day that reminded me of some advice that I have given for a long time. My friend is developing a new kind of software and his coders and programmers are constantly telling him that they can't solve a particular coding issue. He drives them crazy because any time they tell him they can't do something, he expects them to be able to tell him why it won't work. They generally can't immediately answer this question and so they have to go back and figure out why it can't be done.

I laughed when he told me this, because it’s something I have been telling company owners to do for years and I might even have been the one to tell him to do this many years ago. When somebody tells you that something can’t be done, you need to make them tell you why. Over the years I have found asking that simple question to be one of the more powerful management tools you can use.

So what is the value in knowing why something doesn’t work? I’ve always found a number of reasons for using this tool:

  • It helps to turn your staff into critical thinkers, because if they know that they are always going to have to explain why something you want won’t work, then they will learn to ask themselves that question before they come and tell you no.
  • And that is important because often, when examining the issue closer, they will find out that perhaps the answer really isn’t no and that there might be another solution they haven’t tried. So making somebody prove that something won’t work often leads to a path to make it work after all.
  • But even if it turns out that the answer is no, then looking closely at why a given solution to a problem wouldn’t work will often let you find another solution, or even a partial solution to your problem. I find that thinking a problem the whole way through is a useful exercise even when it doesn’t produce a solution.
  • This makes better employees, because it forces them to better understand whatever they are working on.

Let me give a simple example of how this might work. Let’s say you ask one of your technicians to set up some kind of special routing for a customer and they come back and tell you that it can’t be done. That first response, that it won’t work, doesn’t give you any usable feedback. If you take it at face value then you are going to have to tell your customer they can’t have what they are asking for. But when you send that technician back to find out why it won’t work, there are a wide range of possible answers that might come back. It may turn out upon pressing them that the technician just doesn’t know how to make it work – which means that they need to seek help from another resource. They might tell you that the technical manual for the router you are using says it won’t work, which is not an acceptable answer unless technical support at the router company can tell you why. They may tell you that you don’t own all of the software or hardware tools needed to make it work – and now you can decide if obtaining those tools makes sense for the application you have in mind. You get the point: understanding why something doesn’t work often will lead you to one or more solutions.

My whole consulting practice revolves around finding ways to make things work. My firm gets questions every day about things clients can't figure out on their own. We never automatically say that something can't be done, and for the vast majority of the hard questions we are asked we find a solution. The solution we find may not always be what they want to hear, because it might be too expensive or for some other reason won't fit their needs, but they are usually happy to learn all of the facts.

Give this a try. It’s really easy to ask why something won’t work. But the first few times you do this you are going to get a lot of blank stares from your staff if they have not been asked this question many times before. But if this becomes one of the tools in your management toolbox, then I predict you are going to find out that a lot of the unsolvable problems your staff has identified are solvable after all. That’s what I’ve always found. Just don’t do this so well that nobody ever calls us with the hard questions!

Augmented vs. Virtual Reality

We are about to see the introduction of the new generation of virtual reality machines on the market. Not far behind them will probably be a number of augmented reality devices. These devices are something that network operators should keep an eye on, because they are the next generation of devices that are going to be asking for significant bandwidth.

The term 'augmented reality' has been around since the early 1990s and is used to describe any technology that overlays a digital interface on the physical world. Until now, augmented reality has involved projecting opaque holograms to blend into what people see in the real world. Virtual reality takes a very different approach and immerses a person in a fully digital world by projecting stereoscopic 3D images onto a screen in front of the user's eyes.

A number of virtual reality headsets are going to hit the market late this year into next year:

  • HTC Vive is hoping to hit the market by Christmas of this year. This is being developed in conjunction with Valve. This device will be a VR headset that will incorporate some augmented reality, which will allow a user to move and interact with virtual objects.
  • Oculus Rift, owned by Facebook, is perhaps the most anticipated release and is expected to hit the market sometime in 2016.
  • Sony is planning on releasing Project Morpheus in 1Q 2016. This device will be the first VR device integrated into an existing game console.
  • Samsung will be releasing its Gear VR sometime in 2016. This device is unique in that it’s powered by the Samsung Galaxy smartphone.
  • Razer will be releasing a VR headset based upon open source software that they hope will allow for more content delivery. Dates for market delivery are still not known.

All of these first generation virtual reality devices are for gaming and, at least in the first few generations, that will be the primary use for these devices. Like with any new technology, price is going to be an issue for the first generation devices, but one has to imagine that within a few years these devices might be as common as, or even displace, traditional game consoles. The idea of being totally immersed in a game is going to be very attractive.

There are two big players in the augmented reality market—Microsoft's HoloLens and the Google-backed Magic Leap. These devices don't have a defined target release date yet. But the promise for augmented reality is huge. These devices are being touted as perhaps the successor to the smartphone and as such have a huge market potential. The list of potential applications for an augmented reality device is mind-bogglingly large, which must be what attracted Google to buy into Magic Leap.

Magic Leap works by beaming images directly onto a user's retinas, and the strength and intensity of the beam can create the illusion of 3D. But as with Google Glass, a user is also going to be able to see the real world behind the image. This opens up a huge array of possibilities that range from gaming, where the device takes over a large share of the visual space, to the same sorts of communicative and informative functions done by Google Glass.

The big hurdles for augmented reality are how to power the device as well as overcoming the social stigma around wearing a computer in public—who can forget the social stigma that instantly accrued to glassholes, those who wore Google Glass into bars and other public places? As a device it must be small, low power, inconspicuous to use, and still deliver an amazing visual experience to users. It’s probably going to take a while to work out those issues.

The two kinds of devices will compete with each other to some extent on the fringes of the gaming community, and perhaps in areas like providing virtual tours of other places. But for the most part the functions they perform and the markets they chase will be very different.

The Latest on Malware

Cisco has identified a new kind of malware that takes steps to evade being cleansed from systems. The example they provide is the Rombertik malware. This is one example of a new breed of malware that actively fights against being detected and removed from devices.

Rombertik acts much like a normal virus in its ability to infect machines. For example, once embedded in one machine in a network it will send phishing emails to infect other machines, and it exhibits other typical malware behavior. But what is special about Rombertik and other new malware is how hard they fight to stay in the system. For example, the virus contains a false-data generator to overwhelm analysis tools, contains tools that can detect and evade a sandbox (a common way to trap and disarm malware), and has a self-destruct mechanism that can kill the infected machine by wiping out the master boot record.

The problem with this new family of malware is that it evades the normal methods of detection. Typical malware detection tools look for telltale signs that a given website, file, or app contains malware. But this new malware is specifically designed to either hide the normal telltale signs, or else to morph into something else when detected. So as this new malware is detected, by the time you try to eradicate it in its original location it has moved somewhere else.

This new discovery is typical of the ongoing cat and mouse game between hackers and malware security companies. The hackers always get a leg up when they come out with something new and they generally can go undetected until somebody finally figures out what they are up to.

This whole process is described well in two reports issued by web security companies. Menlo Security reports in its State of the Web 2015: Vulnerability Report that there were 317 million pieces of malware produced in 2014. In the report they question whether the security industry is really ready to handle new kinds of attacks.

The report says that enterprises spent more than $70 billion on cybersecurity tools in 2014 but still lost nearly $400 billion as a result of cybercrime. They report that the two biggest sources of malware in large businesses come either through web browsing or from email – two things that are nearly impossible to eliminate from corporate life.

Menlo scanned the Alexa top one million web sites (those getting the most traffic) and found the following:

  • 34% of web sites were classified as risky due to running software that is known to be vulnerable to hacking.
  • 6% of websites were found to be serving malware or spam, or to be part of a botnet.

The other recent report on web vulnerabilities came from Symantec. Symantec said that hackers no longer need to break down the doors of corporate networks when the keys to hack them are readily available. That mirrors the comments by Menlo Security and refers to the fact that companies operate software with known vulnerabilities and then take a long time to react when security breaches are announced.

The report says that in 2014 firms took an average of 50 days to implement security patches. Hackers are launching new kinds of malware and then leaping on a vulnerability before patches are in place. The biggest example of this in 2014 was the Heartbleed vulnerability, which hackers were widely exploiting within 4 hours of it hitting the web, while companies took a very long time to come up with a defense. Symantec says there were 24 separate zero-day attacks in 2014 – meaning attacks based on a newly discovered vulnerability that was either undetected or for which there was no immediate defense.

Symantec reports much the same thing as Menlo Security in that the big danger of malware is what it can do once it is inside of a network. The first piece of malware can hit a network in many different ways, but once there it uses a number of sophisticated tools to spread throughout the network.

There is certainly nothing foolproof you can do to keep malware out of your corporate systems. But most of the ways that networks get infected are not through hackers breaking in, but through employees. Employees still routinely open spam emails and attachments and respond to phishing emails – so making sure your employees know more about malware and its huge negative impact might be your best defense.

Broadband CPNI

The FCC said before they passed the net neutrality rules that they were going to very lightly regulate broadband providers using Title II. And now, just a few weeks after the new net neutrality rules are in place, we already see the FCC wading into broadband CPNI (customer proprietary network information).

CPNI rules have been around for a few decades in the telephony world. These rules serve a dual purpose: they provide customer confidentiality (meaning that phone companies aren't supposed to do things like sell lists of their customers) and they protect customer calling information by requiring a customer's explicit permission to use their data. Of course, we have to wonder if these rules ever had any teeth at all since the large telcos shared everything they had with the NSA. But I guess that is a different topic and it's obvious that the Patriot Act trumps FCC rules.

The CPNI rules for telephone service are empowered by Section 222 of Title II. It turns out that this is one of the sections of Title II from which the FCC didn't choose to forbear for broadband, and so now the FCC has opened an investigation into whether it should apply the same, or similar, rules to broadband customers.

It probably is necessary for them to do this, because once Title II went into effect for broadband this gave authority in this area to the FCC. Until now, customer protection for broadband has been under the jurisdiction of the Federal Trade Commission.

There clearly is some cost for complying with CPNI rules, and those costs are not insignificant, especially for smaller carriers. Today any company that sells voice service must maintain, and file with the FCC, a manual showing how it complies with CPNI rules. Further, it has to periodically show that its staff has been trained to protect customer data. If the FCC applies the same rules to ISPs, then every ISP that sells data services is going to incur similar costs.

But one has to wonder if the FCC is going to go further with protecting customer data. In the telephone world usually the only information the carriers save is a record of long distance calls made from and to a given telephone number. Most phone companies don’t track local calls made or received. I also don’t know of any telcos that record the contents of calls, except in those circumstances when a law enforcement subpoena asks them to do so.

But ISPs know everything a customer does in the data world. They know every web site you have visited, every email you have written, everything that you do online. They certainly know more about you than any other party on the web. And so the ISPs have possession of data about customers that most people would not want shared with anybody else. One might think that in the area of protecting customer confidentiality the FCC would make it illegal for an ISP to share this data with anybody else, or perhaps only allow sharing if a customer gives explicit permission.

I have no idea if the larger telcos use or sell this data today. There is nothing currently stopping them from doing so, but I can't ever recall hearing of companies like Comcast or AT&T selling raw customer data or even metadata. But it's unnerving to think that they can, and so I personally hope that the FCC CPNI rules explicitly prohibit ISPs from using our data. I further hope that if they need a customer's permission to use their data, this is not one of those things that can be buried on page 12 of the terms of service you are required to approve in order to use your data service.

What would be even more interesting is if the FCC takes this one step further and doesn't allow any web company to use your data without getting explicit permission to do so. I have no idea whether they even have that authority, but it sure would be a huge shock to the industry if they tried to impose it.

The Law of Accelerating Returns

Ray Kurzweil, a director of engineering at Google, was hired because of his history of predicting the future of technology. According to Kurzweil, his predictions are common sense once one understands what he calls the Law of Accelerating Returns. That law simply says that information technology follows a predictable and exponential trajectory.

This is demonstrated elegantly by Moore's Law, in which Intel cofounder Gordon Moore predicted in the mid-1960s that the number of transistors incorporated in a chip would double every 24 months. His prediction has held true since then.

But this idea doesn't stop with Moore's Law. The Law of Accelerating Returns says that this same phenomenon holds true for anything related to information technology and computers. In the ISP world we see evidence of exponential growth everywhere. For example, most ISPs have seen the amount of data downloaded by the average household double every four years, stretching back to the dial-up days.

What I find somewhat amazing is that a lot of people in the telecom industry, and certainly some of our regulators, think linearly while the industry they are working in is progressing exponentially. You can see evidence of this everywhere.

As an example, I see engineers designing new networks to handle today’s network demands ‘plus a little more for growth’. In doing so they almost automatically undersize the network capacity because they don’t grasp the multiplicative effect of exponential growth. If data demand is doubling every four years, and if you buy electronics that you expect to last for ten to twelve years, then you need to design for roughly eight times the data that the network is carrying today. Yet that much future demand just somehow feels intuitively wrong and so the typical engineer will design for something smaller than that.
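
The arithmetic behind that eight-times figure is simple enough to sketch. Assuming demand doubles every four years (the doubling period is the assumption here), the capacity multiplier to design for is just 2 raised to the number of doubling periods in the electronics' expected life:

```python
# Sketch of the exponential design math: if demand doubles every
# doubling_years, a network built to last horizon_years needs capacity
# for 2 ** (horizon_years / doubling_years) times today's load.

def demand_multiplier(horizon_years, doubling_years=4):
    return 2 ** (horizon_years / doubling_years)

if __name__ == "__main__":
    for horizon in (4, 8, 12, 20):
        print(f"{horizon:>2} years out -> design for ~{demand_multiplier(horizon):.0f}x today's demand")
    # 12 years -> ~8x and 20 years -> ~32x, which is why designing for
    # "today plus a little more for growth" undersizes the network.
```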

We certainly see this with policy makers. The FCC recently set the new definition of broadband at 25 Mbps. When I look around at the demand in the world today at how households use broadband services, this feels about right. But at the same time, the FCC has agreed to pour billions of dollars through the Connect America Fund to assist the largest telcos in upgrading their rural DSL to 15 Mbps. Not only is that speed not even as fast as today’s definition of broadband, but the telcos have up to seven years to deploy the upgraded technology, during which time the broadband needs of the customers this is intended for will have increased to four times higher than today’s needs. And likely, once the subsidy stops the telcos will say that they are finished upgrading and this will probably be the last broadband upgrade in those areas for another twenty years, at which point the average household’s broadband needs will be 32 times higher than today.

People see evidence of exponential growth all of the time without it registering as such. Take the example of our cellphones. The broadband and computing power demands expected from our cellphones are growing so quickly today that a two-year-old cellphone starts to feel totally inadequate. A lot of people view this as their phone wearing out. But the phones are not deteriorating in two years; instead, we all download new and bigger apps and we are always asking our phones to work harder.

I laud Google and a few others for pushing the idea of gigabit networks. This concept says that we should leap over the exponential curve and build a network today that is already future-proofed. I see networks all over the country that have the capacity to provide much faster speeds than are being sold to customers. I still see cable company networks with tons of customers sitting at 3 Mbps to 6 Mbps as the basic download speed, and fiber networks with customers being sold 10 Mbps to 20 Mbps products. And I have to ask: why?

If the customer demand for broadband is growing exponentially, then the smart carrier will increase speeds to keep up with customer demand. I talk to a lot of carriers who think that it’s fundamentally a mistake to ‘give’ people more broadband speed without charging them more. That is linear thinking in an exponential world. The larger carriers seem to finally be getting this. It wasn’t too many years ago when the CEO of Comcast said that they were only giving people as much broadband speed as they needed, as an excuse for why the company had slow basic data speeds on their networks. But today I see Comcast, Verizon, and a number of other large ISPs increasing speeds across the board as a way to keep customers happy with their product.

How’s Your Strategic Plan?

I help companies develop strategic plans, and one thing that I often find is that people think that strategic planning is the process of developing goals for their company. The first thing I have to point out to them is that having goals is great and you need them, but goals are not a strategic plan.

Having goals is an essential first step in looking into the future because goals define your ultimate vision of where you want your company to go. Goals can be almost anything: increased profits, better sales, improved customer service, eliminating a network shortcoming, and so on. But if you are going to reasonably achieve your goals you need to turn them into both a strategic plan and a tactical plan.

A strategic plan is basically a way to rate and rank your goals and turn them into an action plan. Not everybody goes about this in the same way, but a normal first step is to assess the resources you have available to achieve each of your goals. Almost every company has two primary resources that are limiting factors – cash and manpower. So it’s vital that you somehow determine how much of your scarce resources are needed to achieve each goal on your list.

This is harder than it sounds. Let’s say you have listed five goals. For each of them you want to do the following:

  • The first step is to rank your goals by importance. For example, you may have a few goals that are of top importance (like fixing a problem that is causing network outages or improving margins) while other goals are less important – at least for now. This is perhaps the hardest part of the process because it forces you to choose among your many goals and decide which ones matter most to the business.
  • Once you have a prioritized list of goals, then the next step is to come up with a list of specific tasks necessary to achieve each goal. Be realistic and explicit in this determination. For instance, if you want to increase sales to businesses, then figure out what you think it takes to make it happen. Is that going to require more cash in the form of hiring additional sales staff or paying higher commissions? Will it take more human resources – are there key people in your organization that need to spend time to make the goal happen?
  • There is often more than one reasonable path to achieve a goal, so you also must explore the most likely alternative paths to help determine which one is right for you. This exploration is critical at this stage, because if you only consider one solution you will have locked yourself into a rigidly-defined path without flexibility. So spend some time brainstorming about the best ways to achieve each goal and don't be afraid to consider multiple solutions.
  • Once you have assessed the reasonable ways to achieve each goal, you are then ready to start getting strategic. Very few organizations have enough resources to pursue all of their goals at the same time, and so you need to determine which of the possible solutions to various goals you are going to pursue. This is where you have to get realistic about what can be accomplished within the time frame of your strategic plan. For example, if you have a fixed cash budget for the following year, you obviously can’t pursue plans that cost more than you can afford. And the same with people. If achieving your goals is going to draw too much time from key people, you need to get realistic about how much can be accomplished by the resources you have. This step requires making a realistic ‘budget’ for achieving your goals in terms of your cash and key manpower limitations. I have seen strategic plans that assumed that a few key staffers would spend all of their time on the new projects, and in doing so would ignore their current workload – and such a plan is going to fail due to lack of people resources.
  • The way I like to do this process is much the same way that many people do a family budget. You start with the amount of resources available to 'spend', be that cash or key staff time, and then work backwards through the goals, considering the most important ones first, to see which you can afford to pursue. This can become hard because you will often end up having to scrap some goals that are not 'affordable', and so this process often means making tough choices (a simple version of this selection is sketched just after this list).
  • You want to make sure that the final strategy you settle on includes goals that you can achieve in a reasonable amount of time. I've found that you are almost always better off putting all of your effort into completing a few goals versus only making partial progress toward many goals. You want your organization to have wins and to see progress, and the best way to do this is to get the top goals on your list behind you before you tackle your next strategic plan.
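
Here is a minimal sketch of that 'family budget' selection step. The goal names, cash figures, and staff-hour figures are made up for illustration; the point is simply to walk the goals in priority order and keep only what fits within the cash and key-staff budgets:

```python
# Illustrative sketch of budgeting a ranked list of strategic goals
# against limited cash and key-staff time. All names and numbers are
# hypothetical.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    cash_needed: float        # dollars
    staff_hours_needed: int   # hours of key-staff time

def select_goals(ranked_goals, cash_budget, staff_hour_budget):
    """Keep each goal, in priority order, only if it still fits both budgets."""
    chosen = []
    for goal in ranked_goals:
        if goal.cash_needed <= cash_budget and goal.staff_hours_needed <= staff_hour_budget:
            chosen.append(goal)
            cash_budget -= goal.cash_needed
            staff_hour_budget -= goal.staff_hours_needed
    return chosen

if __name__ == "__main__":
    ranked_goals = [
        Goal("Fix the network outage problem", 150_000, 800),
        Goal("Hire two business sales reps", 200_000, 400),
        Goal("Launch a managed WiFi product", 120_000, 900),
        Goal("Upgrade the billing system", 300_000, 1_200),
    ]
    for g in select_goals(ranked_goals, cash_budget=400_000, staff_hour_budget=1_500):
        print(f"Pursue: {g.name}")
    # Only the goals that fit both budgets survive; the rest wait for
    # the next planning cycle or get scaled back.
```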

The final strategic plan will end up as a list of the goals that you think you can achieve during the strategic planning time frame (should only be a few years at most). From there you are then ready to develop a tactical plan. This means establishing a very specific set of assignments, timelines, and budgets to make sure that the goals you’ve chosen can be implemented. It’s no good creating a strategic plan if you don’t take this extra step to make sure that that plan gets implemented. There are very specific ways to make sure that a tactical plan stays on schedule and on budget – but that is the topic of another blog.

If the above process sounds too challenging to tackle, then don’t hesitate to bring in outside help to facilitate the process. Often, after going through the strategic planning process a few times, businesses eventually don’t need outside help. But learning how to be strategic is like learning anything else; you will find techniques that work for your company, and once you learn the discipline of thinking strategically, you will start to see your goals come to fruition – an outcome that every company wants.

The Cost of International Calling

We have gotten so used to the cost of long distance calls dropping in the US that many people don't realize that it is still very expensive to call some other places in the world.

In the US we are now used to unlimited long distance plans, and so most of us don’t think about the cost of long distance. We all still pay for it—for example, that’s one of the costs built into your cellphone bill. I imagine that there are younger people who have no appreciation that we were once very careful about making long distance calls.

I remember in the early 80s when AT&T announced a 'reduced' long distance plan that had a flat rate of 12 cents per minute. Before that plan, costs varied by the distance called and it was not unusual for calls to some places in the US to cost as much as 50 cents per minute. Long distance rates also varied by time of day and people would wait until midnight to call relatives to get the nighttime rates.

But over the years the FCC has deliberately taken steps to reduce long distance rates since they figured that might be the one thing they could do that would most boost the US economy. And it worked.

At the same time that the US made a deliberate effort to reduce costs many other countries did the same. Thirty years ago it was almost universally expensive to call other countries. Part of this was due to lack of facilities; there were only a few trans-oceanic cables that were capable of carrying voice – and they were generally full all of the time with calls. But today it’s almost as cheap to call places like Canada and a lot of Europe as it is to call in the US. And there are now many calling plans that include a number of foreign countries.

But this is not true everywhere. There are still a lot of places around the world that are very expensive to call. The rates I quote are from Comcast's latest international long distance rates, but the rates charged by other carriers are similar. Even today it costs $2.90 per minute to call Afghanistan. A few years ago that was over $5 per minute. Surprisingly, it's less than half that rate, at $1.20 per minute, to call Antarctica.

It costs a lot more in general to call islands. Most of the Caribbean is between $0.40 and $1.20 per minute (although the US Virgin Islands are at US rates). The Pacific islands in Micronesia are generally around $1 per minute.

In general there are two reasons why rates are so high in some places. For some islands, the cost of calling reflects the expense of the facilities needed to complete the calls. Such calls these days are often completed over satellite, since there are still places not connected to the rest of the world by undersea fiber. But the other big cost component is government tariff rates, charged as a moneymaker for the local governments. This is why you see calls to North Korea costing $3.28 per minute, calls to Laos costing $2.43, and calls to Myanmar costing $2.17.

In most cases these expensive rates are bypassed using voice over IP across the Internet, and so people who live in places with expensive rates usually skip those costs and use the Internet to talk to family overseas. In many countries that is a risk and you can be prosecuted for bypassing the tariff rates. I remember when VoIP was new there were entrepreneurs in Jamaica who set up calling over the Internet and then dumped the calls into the local network. It seemed that the Jamaican government would arrest a few VoIP vendors every week, but new ones always sprang up to take their places. Now only the most repressive countries still try to police this while most have bowed to the reality of VoIP.

I remember working with many clients in the 70s and 80s and one thing I always looked at was their long distance revenues. Even the smallest telcos would have a few residential customers that made over $1,000 per month in long distance calls and many others who spent hundreds of dollars per month. I remember when parents would groan if one of their kids got a boyfriend or girlfriend who was long distance. We’ve come a long way from those days, and unless you have a reason to call a handful of expensive countries or islands a lot, long distance is now one of those things that you don’t give a second thought about.