KPMG’s Cloud Survey

Late last year KPMG published the results of a survey on cloud computing. You can see the results here. The survey was given to 500 CEOs, CIOs, and CFOs of large companies with annual revenues of over $100 million.

You might ask why these results matter much to anybody who is smaller than that. I think it matters because in the IT world, what the big companies do moves downhill to the rest of us. As an example, if the large companies, with all of their buying power, move away from enterprise level routers, then the rest of us will be dragged in that same direction as the market for enterprise routers stops evolving and dries up. The large companies collectively have the majority of the buying power in this market.

When cloud computing got started a few years back, the original sales pitch for the change was all about cost savings. Cloud vendors all touted that it was far cheaper to use computing resources in large data centers than to own your own computing resources and employ a dedicated staff to operate an IT network. And while cost savings is still part of the reason to move to the cloud, it’s no longer the only reason. The survey found the following reasons given by large companies for using the cloud:

  • Cost savings – 49%
  • Enabling mobile work forces – 42%
  • Improving customer service and partner interfaces – 37%
  • Understanding corporate data better – 35%
  • Accelerating product development – 32%
  • Developing new business lines – 30%
  • Sharing data globally – 28%
  • Faster time to market – 28%

In a similar survey from 2012 the responses were primarily about cost savings. For example, enabling a mobile workforce was then given as a reason by only 12% of respondents. What brought about such a big shift in the way that large companies think about the cloud in only a two-year period?

The reason is that the cloud was originally a hardware transition. It let companies stop having to buy and maintain expensive computer systems and a large staff to operate them. Executives were tired of constantly being told that their systems were obsolete (and in our fast changing world they usually were). More importantly, executives were tired of being told that it was too hard to accomplish whatever they most wanted to do and they felt that their IT functions were often holding back their company. Many executives thought of their IT department as a black box which they didn’t understand very well.

In the last few years it has become clear that the cloud is not just a substitute for hardware and staff, but is also a catalyst for changing software. Large corporations have often been locked into huge software systems from companies like Oracle or Microsoft. While these packages did some things very well, there were some functions where they were just adequate, and other functions for which they were downright horrible. But the computer systems and IT staff tended to make everything work with a few integrated software packages rather than support a lot of different programs for various functions.

At the same time as the revolution in network hardware and the shift to the efficiencies of large data centers, a host of new software has come to market that is extremely good at just a few functions. Companies have found that while breaking free of the restrictions of an in-house IT network and staff, they have also been able to break the bundles of the large software packages.

And this can be seen by looking at the claims that the respondents to the survey made about what they have already been able to achieve through the cloud:

  • Improve business performance – 73%
  • Improve the level of service automation – 72%
  • Reduce costs – 70%
  • Better integration of systems – 68%
  • Introduce new features and functions – 68%
  • Enhance interaction with customers and partners – 67%
  • Rapidly deploy new solutions – 67%
  • Replace legacy systems – 66%

Most of these results reflect changes in software as much as they represent just changing computer platforms. This is not to say that a shift to the cloud is seamless. For example, there is a lot of corporate anxiety about the security of their data. But overall, the large corporations are so far very happy with the shift and most plan on transitioning more to the cloud. Smaller companies are going to feel the tug to move to the cloud for the same reasons. It’s likely that you can save money and begin using newer and better software after such a change.

New Technology – Medical Applications

This month I look at some technology advances in medicine.

Robot Drug Researcher. A team at the University of Manchester has developed an AI system they call Eve which is designed to assist in drug research. Eve is a combination of a computer and a system of mechanical arms that lets Eve mix various chemicals to search for new compounds. The drug industry has already developed sophisticated software that helps to visualize chemical compounds, and Eve adds the ability to ‘learn’ on top of the existing software platforms.

During the original proof of concept for Eve, the computer found a potentially useful compound for fighting drug-resistant malaria. Eve found a chemical called TNP-470 that effectively targets an enzyme that is key to the growth of Plasmodium vivax, one of the parasites that cause malaria. Many drugs do their job by ‘fitting’ a chemical into a disease agent to block its function, in the way that a key fits into a lock. Drug chemists often search for cures by looking at classes of chemicals that might work in a given application based upon their shapes. But then they have to slog through hundreds of thousands of tests to find the perfect solution. Eve can automate and speed up that search process. The team was not really expecting this kind of immediate breakthrough, but it shows the potential for automating the search process.
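To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of screen-and-learn loop described above. The compound library, assay function, and crude scoring model are all hypothetical stand-ins, not anything from the Manchester team’s actual system.

```python
import random

# Hypothetical compound library: each compound is described by a few
# numeric features (e.g. size, charge, hydrophobicity). Purely illustrative.
library = [{"id": i, "features": [random.random() for _ in range(3)]}
           for i in range(10_000)]

def run_assay(compound):
    """Stand-in for a robotic wet-lab test; returns a potency score."""
    x = compound["features"]
    return 1.0 - abs(x[0] - 0.7) - abs(x[1] - 0.2) + random.gauss(0, 0.05)

def predict(weights, compound):
    """A crude linear 'model' of potency learned from past assays."""
    return sum(w * f for w, f in zip(weights, compound["features"]))

weights = [0.0, 0.0, 0.0]
tested, untested = [], list(library)

for round_num in range(5):
    # Pick the batch the current model thinks is most promising.
    untested.sort(key=lambda c: predict(weights, c), reverse=True)
    batch, untested = untested[:50], untested[50:]

    # "Run" the assays, then nudge the model toward what actually worked.
    for c in batch:
        score = run_assay(c)
        tested.append((c, score))
        error = score - predict(weights, c)
        weights = [w + 0.1 * error * f for w, f in zip(weights, c["features"])]

best = max(tested, key=lambda t: t[1])
print(f"best candidate after 5 rounds: compound {best[0]['id']}, score {best[1]:.2f}")
```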

Microchips Deliver Drugs Precisely. Biomedical engineer Robert Langer has developed a system that will allow an implanted chip to release drugs in response to a WiFi signal. The chips have up to a thousand tiny wells and can hold many doses of the same drug or a number of different drugs. Each little well has a cover that can be opened in response to a wireless signal.

This technology could be useful in treating some forms of cancer as well as certain kinds of diabetes where small, timed releases of drugs are the only effective treatment, as compared to a single large dose from an injection or a pill. With 1,000 possible doses the device could deliver drugs over a long period of time and might also be useful for such things as birth control.

Organs-on-a-Chip. Fraunhofer, a German research organization, recently announced that it has developed what it calls an organ-on-a-chip. The group has developed chips where human cells from various organs are put into tiny wells and connected by tiny canals. When fully functioning, the chips can then stand in for a human body for the purposes of testing the effects of various drugs.

The promise for the technology is that it will be able to greatly speed up the drug testing process, and can possibly replace having to test drugs on animals before a drug can be tested on humans. Normal drug testing can take years, and researchers have never been fully enamored with animal testing since they have always known that many drugs affect humans differently than animals; this testing method can give more precise feedback. The hope is that the organs-on-a-chip will knock years off of the testing process for promising drugs while also more quickly identifying drugs that have a detrimental effect on human tissues.

Robot Orderlies. The University of California, San Francisco’s Mission Bay wing is testing a robot orderly they have named Tug. The robots are being used to shuttle things around the hospital, and they deliver such things as clean linens, meals or drugs to rooms as needed. The hospital plans on having a fleet of 25 of the robots by this month. Already each of the robots at the hospital is logging 12 miles of hallway travel per day.

The robots navigate using built-in maps of the hospital. They are programmed to not be intrusive and, for example, will patiently wait to get past people who are blocking a hallway. The robots take the elevators which they call by wireless signal. There have been trials of robot orderlies before, but this is the largest trial to date and the robots are taking over a host of orderly services.

Smartphone as Medical Monitor. Apple has teamed up with a number of leading hospitals to conduct trials where they will use the iPhone 6 and smart watches to monitor patients. The idea is to monitor patients 24 hours per day after they have been treated for major health problems and released from the hospital. The monitoring will give doctors the ability to watch key metrics such as heart rate, blood pressure, blood sugar, and other important indicators, much as they would have done if the patient was still in the hospital.

Apple is calling the technology package HealthKit and it puts them far ahead of rivals such as Samsung and Google since over a dozen hospitals are now trialing the technology. The trials are to help doctors determine the degree to which tracking patients’ symptoms will help their treatment. For now the trials are working with critically ill patients, but the eventual plan is to develop routine tracking for the general population that will help to spot health issues before they become otherwise apparent. You can envision someday getting a call from your doctor asking you to come in since your blood pressure or blood sugar are outside normal bounds.

A Path to the Infosphere

Eric Schmidt of Google recently made headlines when he said: “The Internet will disappear.” By that he meant that it will become so seamless that it will surround us everywhere. Obviously a lot of things have to happen before we can all move to the ubiquitous infosphere. For instance, as I just covered in another blog, we will need small, nonintrusive wearables. Gone will be the fitness trackers and smartwatches and even the cell phones. We’ll instead have some small device that is always with us and that can communicate with us both audibly and visually. This could be an earbud or even an implanted chip, along with some device that can cast images onto our retinas, something far less clunky than Google Glass.

But aside from better devices, the biggest change is going to have to be in the way the web functions. Gone would be today’s browser-based model, where we interact with one program or one website at a time. The way we work on computers today is too linear; while we may have many programs running, each of them is separate, and we dip into them one at a time.

The wireless world has already shown us a partial path to the future by virtue of having moved to a world of apps rather than URL websites. But apps still suffer from the same problem of being used one at a time, and there is very little linking between apps today. There are apps today that want to dip into other apps to grab existing data, but I normally get the impression that this is more for the benefit of the app company than it is for the user. I constantly run across apps that ask if they can have access to my contacts list on Facebook or LinkedIn and I always say no. Unless it’s some sort of a communications app, these companies are just fishing for more leads to try to sell their product. What we need is not more advertising linking, but functional linking.

There is an attempt in the app world to establish better links between apps. For instance, Google’s App Indexing and Facebook’s App Links are the start of an effort to create what the industry is calling deep linking, which is a way for apps to usefully share data for the benefit of the user. There are a number of other software companies working in this area.

Today, content providers build custom cross-linking libraries to fulfill this function. The cross-linking process makes it possible to move seamlessly from one app to another. But such links are custom-made and are very specific to a small set of apps. The links share data fields so that a customer using one app can be sent to a second app without logging in again and without having to provide basic data about who they are.
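As a concrete illustration of the idea, here is a minimal sketch of how a deep link might carry user context from one app to another without forcing a second login. The URI scheme, field names, and shared secret are all hypothetical; real App Links or App Indexing implementations have their own formats and security models.

```python
import base64, hashlib, hmac, json
from urllib.parse import parse_qs, urlencode, urlparse

SHARED_SECRET = b"demo-secret"  # hypothetical secret shared by the two apps

def build_deep_link(user_id, display_name):
    """App A builds a link into App B, carrying signed user context."""
    payload = json.dumps({"user_id": user_id, "name": display_name}).encode()
    token = base64.urlsafe_b64encode(payload).decode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return f"appb://checkout?{urlencode({'ctx': token, 'sig': sig})}"  # hypothetical scheme

def handle_deep_link(uri):
    """App B verifies the signature and trusts the context if it checks out."""
    params = parse_qs(urlparse(uri).query)
    payload = base64.urlsafe_b64decode(params["ctx"][0])
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, params["sig"][0]):
        raise ValueError("tampered or untrusted link")
    return json.loads(payload)  # the user lands in App B already identified

link = build_deep_link("u-123", "Jane Doe")
print(handle_deep_link(link))  # {'user_id': 'u-123', 'name': 'Jane Doe'}
```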

But what’s really needed in the long run, if we are going to get to a seamless infosphere, is a software system that automatically allows a user to shift from one app to another without ever having to log in. The whole idea of logging in has to go away. We need apps that can authenticate us on their own and don’t have to keep asking basic questions about who we are. So another thing that is needed for a seamless infosphere is some sort of foolproof authentication. Apps need to be able to trust that we are who we say we are.

But the flip side of that is that as users we need to know that apps won’t spy on us and suck out every bit of information about us. When apps can talk to each other without our permission, we need to have some sort of privacy matrix established that defines what we are willing to share and not share. And the apps must follow the rules that each of us establishes. So another thing needed for us to feel safe in the infosphere is some sort of trustworthy privacy rules that all programs we interface with will follow.

One of the early dangers I see from the linking process is that it could become very proprietary. If Google, Facebook, or Apple develops a suite of linked apps that work well together but that don’t link to outside apps, then we will have taken a step backwards and will have undone the intent of the recent net neutrality ruling. That ruling ensures that large ISPs don’t restrict entry of new competitors into the web market. But that ruling does not protect against the large content providers getting so large and ubiquitous that they kill off competitors by locking them out of linked systems. So eventually we are going to need net neutrality rules for content providers.

So we are almost there for a ubiquitous web. All we need is a total migration to apps, better wearable devices, foolproof authentication, better privacy screens to protect our data, rules that allow any app to safely link with others, and net neutrality rules that don’t let any content provider control the infosphere. Come on, Silicon Valley. We’re waiting.

WiFi Blocking

The FCC recently ruled against Marriott for blocking customers’ access to WiFi generated by their own cellphones. Guests who tried to use their own WiFi were deauthenticated so that the only WiFi option available was the one sold by the hotel, for a hefty daily fee. The Marriott WiFi engineers testified that they had done this to protect their own WiFi networks for paying customers against interference. But the FCC ruled against Marriott and told them to stop blocking customers.

My gut feeling is that Marriott was doing this for the money, because they must have gotten a ton of customer complaints, and it’s hard to believe they would have kept backing their IT engineers over the public otherwise. But as the FCC ruling made clear, it didn’t really matter why Marriott did it. There is no valid reason to block WiFi.

What Marriott failed to realize is that WiFi is truly a public spectrum. And while it is open to everybody, it also comes with some rules about how the public is allowed to use it. The FCC spectrum rules are clear on this, but I suspect that even many industry people have never read them. Certainly the manufacturers of WiFi devices don’t educate their customers very much about the obligations of using the spectrum.

The following portions of the FCC rules, although written in tech-speak, sum up the WiFi obligations:

§15.5   General conditions of operation.

(a) Persons operating intentional or unintentional radiators shall not be deemed to have any vested or recognizable right to continued use of any given frequency by virtue of prior registration or certification of equipment, or, for power line carrier systems, on the basis of prior notification of use pursuant to §90.35(g) of this chapter.

(b) Operation of an intentional, unintentional, or incidental radiator is subject to the conditions that no harmful interference is caused and that interference must be accepted that may be caused by the operation of an authorized radio station, by another intentional or unintentional radiator, by industrial, scientific and medical (ISM) equipment, or by an incidental radiator.

What these rules mean is that nobody has any more right to use the WiFi spectrum than anybody else. It does not matter if you are the first one using the spectrum in an area – everybody else has the right to use the spectrum in that same area as well. Further, you are allowed to use the spectrum as long as you don’t harm other users, with the caveat that lawful interference must be accepted. With most licensed spectrum bands no interference is allowed. But WiFi, by its very definition as a public spectrum, can have mountains of interference and still be operating within the law. So when the rules say that you can’t cause harmful interference, this is interpreted for WiFi to mean that you can’t somehow stop others from using the spectrum – but that normal interference with WiFi is perfectly lawful and expected.

The Marriott engineers also tried to argue that deauthentication is not the same thing as interference. The system they were using repeatedly sent out signals that stopped guests from staying connected to their phone-generated WiFi hotspots. Marriott says they weren’t blocking the spectrum, just the use of the spectrum, a very fine distinction that the FCC also didn’t buy.

And so the Marriott engineers were wrong about a few very basic rules of spectrum usage. They had no more right to the WiFi spectrum inside the hotel than any of their customers. And it doesn’t matter if customer use of WiFi from cellphones interferes with Marriott WiFi, since the cellphone WiFi is lawful and the interference is legally acceptable.

This is a caution to anybody who wants to use WiFi in a commercial application. Whether you are a wireless ISP (WISP), a hospital, an airport, or a coffee shop, you have no more right to the spectrum than anybody else. Again, this is something that the makers of WiFi equipment don’t tell their customers, or at least not outside of the very small print. If you really need interference-free transmissions, you ought to be looking for a different spectrum to use. There are absolutely no guarantees with WiFi, regardless of the claims of the vendor who sold you your gear.

There have been several attempts over the years to build large public outdoor WiFi networks. Almost by definition these networks are going to fail, or at least perform incredibly poorly in some places. Such networks have to compete against every home router, public hotspot and other uses of the spectrum in the same area. Further, like cellular networks, WiFi networks can become overloaded with too many simultaneous users.

Some of us are old enough to remember the days when the 900 MHz spectrum got overloaded. This is also free public spectrum, and it was originally used for everything from cordless phones to garage door openers. It got so overloaded that eventually you couldn’t hold a 900 MHz phone connection long enough to finish a call. Because it seems like everybody has a plan to use WiFi, the day might come when this spectrum will also get overloaded in some places. And the only real solution for that will be for the FCC to provide more public spectrum, because WiFi interference is lawful and expected, as much as users might hate it.

I Hate Passwords

I honestly hate passwords. It seems like every site that I register with has some slightly different rules for what constitutes an acceptable password. Having different passwords drives me crazy because my brain can remember things like phone numbers, but passwords seem to elude my memory.

And now I have been reading that the rules on the various sites having to do with password safety are largely in vain anyway since it’s now pretty easy to crack the kinds of passwords that most sites require you to create. I suspect that these sites all know this, but they put you through the effort to come up with an acceptable password to give you a false sense of security.

We all know what good passwords are supposed to be. They must be eight or more characters, with a mix of upper and lower case letters, numbers, and symbols. They should not include any word used in a dictionary including silly substitutions like using ‘!’ instead of the letter L. And you are not supposed to repeat passwords on multiple sites.

And so we sit at each new web site (and it seems that everybody wants you to create a password these days) and we cook up some dumb new combination that we are never ever going to remember just so we can shop for tea or read a news article. We try combinations until the site takes it and also shows us a nice green bar to prove that our new password is a safe one.

But this is more or less a waste of time. The bad guys who crack passwords also know all of these rules and the rules actually make it easier for them to crack your password. It doesn’t seem like somebody ought to be able to crack a password like aN34%6!bJ, but they can, and fairly easily. (Have I mentioned yet that I also hate sites that make me pick stupid passwords?)

Of course, it’s even easier to crack the really stupid passwords. For yet another year ‘123456’ is still the most commonly found password followed closely by ‘password’. But let’s face it, anybody using those is not really caring too much if they get hacked or if somebody deletes their Pinterest page or sees the news articles they have saved.

So how do hackers crack our passwords so easily? They get a surprisingly large number of them directly from people through phishing. People type their passwords into fake websites all of the time and then are dumb enough to also type in credit card numbers or bank account numbers when asked. There is not much more advice about that other than – don’t do it! You are much better off if you are like me and you don’t even know your bank account number! (But luckily my wife does).

But hackers also get millions of passwords by breaking into commercial sites and stealing their password and account files. This lets hackers get access to huge numbers of passwords at the same time. Almost all websites store passwords using a hashing algorithm rather than storing the passwords themselves. A simple password like dog might be saved as a long string of letters and numbers called a ‘hash’. When websites are hacked and the bad guys make off with millions of passwords, they don’t get your actual passwords, but a file of these hashes.

Hackers then attack the pile of hashes with computers that can make billions of guesses per second, hashing candidate passwords and comparing the results to the stolen hashes. They start with all of the easy-to-crack passwords like ‘123456’, which will turn up multiple times in the pile. Eventually they can recover many of the passwords in the file they have stolen.
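Here is a minimal sketch of both sides of that process, assuming a plain salted SHA-256 hash for simplicity (real sites should use a slow, purpose-built scheme like bcrypt or scrypt instead):

```python
import hashlib, os

def store_password(password):
    """What a site keeps: a random salt plus the hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def check_password(password, salt, digest):
    """Login check: hash the attempt the same way and compare."""
    return hashlib.sha256(salt + password.encode()).hexdigest() == digest

# What a cracker does with a stolen (salt, hash) pair: guess and compare.
salt, stolen_hash = store_password("123456")
wordlist = ["password", "letmein", "qwerty", "123456", "dragon"]
for guess in wordlist:
    if check_password(guess, salt, stolen_hash):
        print(f"cracked: {guess}")
        break
```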

I say many passwords, because it turns out that there is one set of passwords that is harder to crack than most. It comes from stringing together long chains of nonsense words that you can remember but that are not commonly used together. For instance if your password is ‘frogflatchevydog’ to memorialize the day you ran over a frog and then your dog sniffed it, then such a password is much harder to crack than a normal one. No password is impossible to crack, but the amount of effort required to crack the above password might take somebody a hundred hours more effort than cracking easier passwords, and it’s likely that nobody will put in the effort unless they really want you specifically.
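A rough back-of-the-envelope calculation shows why the long passphrase holds up better, at least against an attacker who has to brute-force it character by character and doesn’t already have your word combination in a phrase dictionary. The guess rate below is an assumed round number, not a measurement:

```python
import math

# Search space if an attacker has to brute-force character by character.
nine_char_full = 95 ** 9        # 9 characters drawn from ~95 printable symbols
sixteen_char_lower = 26 ** 16   # 16 lowercase letters, e.g. 'frogflatchevydog'

print(f"aN34%6!bJ search space:        ~10^{math.log10(nine_char_full):.0f}")
print(f"frogflatchevydog search space: ~10^{math.log10(sixteen_char_lower):.0f}")

rate = 100e9  # assume 100 billion guesses per second
print(f"9-char password:    ~{nine_char_full / rate / 86400:.0f} days to exhaust")
print(f"16-char passphrase: ~{sixteen_char_lower / rate / 86400 / 365:.0f} years to exhaust")
```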

Keep in mind that you can’t string together any common phrases that can be tested easily. For example, ‘allmimsyweretheborogoves’ is relatively easy to crack because it’s all over the Internet, since many people love Lewis Carroll books. Cracking programs search for billions of common phrases that they find on the Internet, meaning you probably can’t now use my great password suggestion about the flat frog.

Hopefully we are soon moving to a day when we won’t need any passwords. There has to be something better, and it will likely be a combination of multiple biometric readings from your own body. Hackers have already shown that they can crack a one-layer biometric password like a fingerprint. But it becomes mathematically nearly impossible to crack a system that uses multiple biometric readings simultaneously. So the ultimate password is eventually going to be you. That is a password I can finally like.

Is the Universal Translator Right Around the Corner?

We all love a race. There is something about seeing somebody strive to win that gets our blood stirring. But there is one big race going on now that you’ve likely never heard of, which is the race to develop deep learning.

Deep learning is a specialized field of Artificial Intelligence research that looks to teach computers to learn by structuring them to mimic the neurons in the neocortex, that portion of our brain that does all of the thinking. The field has been around for decades, with limited success, and has needed faster computers to make any real headway.

The race is between a few firms that are working to be the best in the field. Microsoft and Google have gone back and forth with public announcements of breakthroughs, while other companies like Facebook and China’s Baidu are keeping their results quieter. It’s definitely a race, because breakthroughs are always compared to the other competitors.

The current public race deals with pattern recognition. The various teams are trying to get a computer to identify various objects in a defined data set of millions of pictures. In September Google announced that it had the best results on this test, and just this month Microsoft said its computers not only beat Google’s but also did better than people do on the test.

All of the companies involved readily admit that their results are still far below what a human can do naturally in the real world, but they have made huge strides. One of the best known demonstrations was done a few years ago by Google, which had its computer look at over 10 million images taken from YouTube videos and asked it to identify cats. The computer did twice as well as any previous attempt, which was particularly impressive since the Google team had not defined for the computer ahead of time what a cat was.

There are some deep learning techniques in IBM’s Watson computer that beat the best champs in Jeopardy. Watson is currently being groomed to help doctors make diagnoses, particularly in the third world where there is a huge lack of doctors. IBM has also started selling time on the machine to anybody and there is no telling all of the ways it is now being used.

Probably the most interesting current research is in teaching computers to learn on their own. This is done today by stacking multiple layers of ‘neurons’. The first layer learns a basic concept, like recognizing somebody speaking the letter S. The outputs of many first-layer neurons are fed to a second layer, which can then recognize more complex patterns. This process is repeated until the computer is able to recognize complex sounds.
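As a minimal sketch of that layered idea, here is a tiny feed-forward network in NumPy. The layer sizes, weights, and input are made up purely for illustration; real deep learning systems have millions of learned parameters and many more layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of 'neurons': a weighted sum of inputs pushed through a
    nonlinearity, so each successive layer can represent more complex patterns."""
    return np.tanh(inputs @ weights + biases)

# Pretend input: 64 numbers describing a short snippet of audio.
snippet = rng.normal(size=64)

# Layer 1 detects simple features, layer 2 combines them, layer 3 scores
# 26 possible letters. (Weights here are random; training would set them.)
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 26)), np.zeros(26)

h1 = layer(snippet, w1, b1)   # simple patterns
h2 = layer(h1, w2, b2)        # combinations of simple patterns
scores = h2 @ w3 + b3         # one score per letter

print("most likely letter:", "abcdefghijklmnopqrstuvwxyz"[int(np.argmax(scores))])
```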

The computers being used for this research are already getting impressive. The Google computer that did well learning to recognize cats had a billion connections. That computer was 70% better at recognizing objects than any prior computer. For now, the breakthroughs in the field are being accomplished by applying brute computing force, and the cat-test computer used over 16,000 computer processors, something that only a company like Google or Microsoft has available.

Computer scientists all agree that we are probably still a few decades away from a time when computers can actually learn and think on their own. We need a few more turns of Moore’s Law for the speed of computers to increase and the size of the processors to decrease. But that does not mean that there are not a lot of current real life applications that can benefit from the current generation of deep learning computers.

There are real-world benefits of the research today. For instance, Google has used this research to improve the speech recognition in Android smartphones. But what is even more exciting is where this research is headed for the future. Sergey Brin says that his ultimate goal is to build a benign version of HAL from 2001: A Space Odyssey. It’s likely to take multiple approaches in addition to deep learning to get to such a computer.

But long before a HAL-like computer we could have some very useful real-world applications from deep learning. For instance, computers could monitor complex machines like electric generators and predict problems before they occur. They could be used to monitor traffic patterns to change traffic lights in real time to eliminate traffic jams. They could be used to enable self-driving cars. They could produce a universal translator that will let people with different languages converse in real-time. In fact, in October 2014, Microsoft researcher Rick Rashid gave a lecture in China. The deep learning computer transcribed his spoken lecture into written text with a 7% error rate. It then translated it into Chinese and spoke to the crowd while simulating his voice. It seems like with deep learning we are not far away from having that universal translator promised to us by science fiction.
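The translation demo described above is essentially three models chained together: speech recognition, translation, and speech synthesis in the speaker’s own voice. The sketch below shows only that plumbing; every function in it is a hypothetical stub standing in for a trained model, not Microsoft’s actual system.

```python
# Hypothetical stand-ins for the three stages; real systems would plug in
# trained speech-recognition, translation, and speech-synthesis models here.
def transcribe(audio):            # speech -> text (with some error rate)
    return "deep learning is changing how computers understand speech"

def translate(text, target):      # text -> text in the target language
    return f"[{target} translation of: {text}]"

def synthesize(text, voice):      # text -> audio in the speaker's own voice
    return f"<audio spoken in {voice}'s voice: {text}>"

def live_translator(audio, target_language, speaker_voice):
    """Chain the three stages, as in the lecture demo described above."""
    text = transcribe(audio)
    translated = translate(text, target_language)
    return synthesize(translated, speaker_voice)

print(live_translator(b"...microphone samples...", "Chinese", "the speaker"))
```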

Deceptive Billing Practices

In case you haven’t looked closely at your cable bill lately, there are likely a number of mysterious charges on it that look to be for something other than cable TV service. There was a day not too many years ago when a cable bill was simple. The bill would list the cable package you purchased as well as some sort of local franchise tax. There also might have been some line-item purchases if you bought pay-per-view movies or watched wrestling or other pay-per-view events.

But cable bills have gotten a lot more complicated because cable companies have been slyly introducing new charges on their bills in an effort to disguise the actual price of their basic cable packages. Here are a few of the charges I have heard about or seen on recent cable bills:

  • Broadcast TV Fee. This is a new fee into which cable companies shift some of the rising retransmission costs they pay for access to the broadcast networks of ABC, CBS, Fox and NBC. You can sympathize some with the cable operators on this fee since a decade ago cable companies got to carry these networks for free. But the network owners finally woke up to the fact that they could charge retransmission fees, and since then the rates for carrying these networks have grown to roughly $2 per network, per customer, per month. But still, these fees ought to be part of basic cable, which is the smallest package that includes the core channels and which must then be included in every other cable package.
  • Sports Programming Fees. It’s debatable whether sports programming or local retransmission fees have grown the most over the last decade. Certainly there was a day when there was only ESPN and a handful of other minor sports channels. But now cable systems are packed full of sports channels and each of them raises rates significantly every year to pass on the fees they pay to sports leagues to carry their content. The problem with starting a new fee to cover some of the increases in sports programming is that it clearly foists the cost of sports programming on everybody, when surveys show that a majority of customers are not very interested in sports outside of maybe the NFL.
  • Public Access Fee. In many cities the cable companies are required to carry channels that cover local government meetings and other local events. Other than having to reserve a slot on the cable system there is normally not much actual cost associated with these channels. So it’s incredibly cynical for a cable company to invent a fee to charge people to watch a channel that the cable company has agreed to carry, and for which they have very little cost.
  • Regulatory Recovery Fee. This one has me scratching my head since most cable companies are lightly regulated and pay very few taxes other than franchise fees, which they already put directly onto people’s bills. This fee seems to be pure deception to make people think they are paying taxes, when instead this is a fee that the cable company pockets.

Additionally, cable companies have recently really jacked up the cost of both settop boxes and cable modems. Interestingly, the actual cost of a settop box, at $80 – $100, has dropped over the last decade and continues to drop. It’s the same with cable modems. It’s hard to justify paying a monthly fee of up to $9 for a cable modem box that probably costs $80. Customers can theoretically opt out of both of these charges, but the large cable companies make it really hard to do so.
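A quick bit of arithmetic, using the rough figures above, shows how lopsided those rental fees are (the purchase price and fee are the ballpark numbers cited in the paragraph, not any particular company’s pricing):

```python
modem_cost = 80.00   # rough retail price of a cable modem, per the figure above
monthly_fee = 9.00   # top-end monthly rental fee cited above

payback_months = modem_cost / monthly_fee
five_year_rental = monthly_fee * 12 * 5

print(f"rental covers the hardware cost in about {payback_months:.0f} months")
print(f"five years of rental: ${five_year_rental:.0f} for an ${modem_cost:.0f} device")
```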

The idea of misnamed fees has been around for a while and started with telephone service. Starting back in 1984, the FCC allowed the telcos to migrate some of the charges that they used to bill to long distance companies for using the local loop into a fee assessed directly on customers. Since then, telcos have had a separate fee called a Subscriber Line Charge, or an Access Fee, or sometimes an FCC Fee on their bills. But this was never a tax, as most customers assume, and the telco simply pockets this money as part of local rates. When the cable companies got into the voice business they largely copied this same fee, even though they never had to make the same shift of access revenues that created the charge. The FCC ought to do away with this fee entirely and require it be added to local rates where it belongs.

I think perhaps one of the reasons that the cable companies are so against Title II regulation is that these kinds of billing practices would then come under FCC scrutiny. It’s hard to think of these various fees as anything other than outright deception and fraud. The companies that charge them are trying to be able to say in advertising that their rates are competitive, when in fact, by the time you add on the various ‘fees’, the actual cost of their products is much higher than what they advertise. I’m also surprised that the FTC has not gone after these fees since they are clearly intended to deceive the general public about what they are buying.

You might sympathize with the cable companies a little in that they have been bombarded year after year with huge increases in the cost of programming. But my sympathy for them evaporates once I look at the facts. When their programming costs go up each year they always raise their rates considerably more than the increased cost of programming and they use rate increases to increase their profit margin. Additionally, for the largest cable companies, part of those rate increases are for programming they own, such as the local sports networks.

We all know that the cost of cable is going to drive a lot of households to find a cheaper alternative, and when that happens the cable companies have to shoulder a lot of the blame. People might not understand the line items on their bill, but they know that the size of the check they write each year gets a lot bigger, and that is all that really matters.

The Battle of the Routers

There are several simultaneous forces tugging at companies like Cisco that make network routers. Cloud providers like Amazon and CloudFlare are successfully luring large businesses to move their IT functions from local routers to large data centers. Meanwhile, other companies like Facebook are pushing small, cheap routers using open source software. But Cisco is fighting back with its push for fog computing, which will place smaller, function-specific routers close to the source of data at the edge of the network.

Cloud Computing.

Companies like Amazon and CloudFlare have been very successful at luring companies to move their IT functions into the cloud. It’s incredibly expensive for small and medium companies to afford an IT staff or outsourced IT consultants, and the cloud is reducing both hardware and people costs for companies. CloudFlare alone last year announced that it was adding 5,000 new business customers per day to its cloud services.

There are several trends that are driving this shift to data centers. First, the cloud companies have been able to emulate with software what formerly took expensive routers at a customer’s location. This means that companies can get the same functions done for a fraction of the cost of doing IT functions in-house. The cloud companies are using simpler, cheaper routers that offer brute computing power and that are also becoming more energy efficient. For example, Amazon has designed all of the routers used in its data centers and doesn’t buy boxes from the traditional router manufacturers.

Businesses are also using this shift as an opportunity to unbundle from the traditional large software packages. Businesses historically have signed up for a suite of software from somebody like Microsoft or Oracle and would live with whatever those companies offered. But today there is a mountain of specialty software that outperforms the big software packages for specific functions like sales or accounting. Both the hardware and the new software are easier to use at the big data centers and companies no longer need to have staff or consultants who are Cisco certified to sit between users and the network.

Cheap Servers with Open Source Software.

Not every company wants to use the cloud, and Cisco has new competition for businesses that want to keep local servers. Just during this last week both Facebook and HP announced that they are going to start marketing their cheaper routers to enterprise customers. Like most of the companies today with huge data centers, Facebook has developed its own hardware that is far cheaper than traditional routers. These cheaper routers are brute-force computers stripped of everything extraneous, with all of their functionality defined by free open source software; customers are able to run any software they want. HP’s new router is an open source Linux-based router from its long-time partner Accton.

Cisco and the other router manufacturers today sell a bundled package of hardware and software and Facebook’s goal is to break the bundle. Traditional routers are not only more expensive than the new generation of equipment, but because of the bundle there is an ongoing ‘maintenance fee’ for keeping the router software current. This fee runs as much as 20% of the cost of the original hardware annually. Companies feel like they are paying for traditional routers over and over again, and to some extent they are.

These are the same kinds of fees that were common in the telecom industry historically with companies like Nortel and AT&T / Lucent. Those companies made far more money off of maintenance after the sale than they did from the original sales. But when hungry new competitors came along with a cheaper pricing model, the profits of those two companies collapsed over a few years and brought down the two largest companies in the telecom space.

Fog Computing.

Cisco is fighting back by pushing an idea called fog computing. This means having limited-function routers on the edge of the network to avoid having to ship all data to some remote cloud. The fog computing concept is that most of the data that will be collected by the Internet of Things will not necessarily need to be sent to a central depository for processing.

As an example, a factory might have dozens of industrial robots, and there will be sensors that constantly monitor them to spot troubles before they happen. The local fog computing routers would process a mountain of data over time, but would only communicate with a central hub when they sense some change in operations. With fog computing the local routers would process data for the one very specific purpose of spotting problems, which would save the factory owner from paying for terabits of data transmission while still getting the advantage of being connected to a cloud.
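A minimal sketch of that edge-filtering idea is below. The thresholds, readings, and the send_to_cloud stub are invented for illustration; a real fog deployment would use whatever telemetry and uplink the factory’s systems actually provide.

```python
import statistics

def send_to_cloud(alert):
    """Hypothetical uplink; in practice this would be an API call or message publish."""
    print("ALERT sent to central hub:", alert)

def watch_robot(robot_id, vibration_readings, window=50, tolerance=3.0):
    """Process readings locally; only talk to the cloud when something changes.

    Keeps a rolling baseline of vibration levels and raises an alert when a
    new reading drifts more than `tolerance` standard deviations from it.
    """
    baseline = []
    for reading in vibration_readings:
        if len(baseline) >= window:
            mean = statistics.mean(baseline)
            stdev = statistics.stdev(baseline) or 1e-9
            if abs(reading - mean) > tolerance * stdev:
                send_to_cloud({"robot": robot_id, "reading": reading,
                               "baseline": round(mean, 2)})
        baseline.append(reading)
        baseline = baseline[-window:]  # everything else stays local and is discarded

# Simulated stream: steady vibration, then a sudden spike worth reporting.
stream = [1.0 + 0.01 * (i % 5) for i in range(200)] + [4.5]
watch_robot("weld-arm-7", stream)
```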

Fog computing also makes sense for applications that need instantaneous feedback, such as with an electric smart grid. When something starts going wrong in an electric grid, taking action immediately can save cascading failures, and microseconds can make a difference. Fog computing also makes sense for applications where the local device isn’t connected to the cloud 100% of the time, such as with a smart car or a monitor on a locomotive.

Leave it to Cisco to find a whole new application for boxes in a market that is otherwise attacking the boxes they have historically built. Fog computing routers are mostly going to be smaller and cheaper than the historical Cisco products, but there is going to be a need for a whole lot of them when the IoT becomes pervasive.

If You Think You Have Broadband, You Might be Wrong

The FCC has published the following map that shows which parts of the country they think have 25 Mbps broadband available. That is the new download speed that the FCC recently set as the definition of broadband. On the map, the orange and yellow places have access to the new broadband speed and the blue areas do not. What strikes you immediately is that the vast majority of the country looks blue on the map.

The first thing I did, which is probably the same thing you will do, is to look at my own county. I live in Charlotte County, Florida. The map shows that my town of Punta Gorda has broadband, and we do. I have options up to 110 Mbps with Comcast and I think up to 45 Mbps from CenturyLink (not sure of the exact speed they can actually deliver). I bought a 50 Mbps cable modem from Comcast, and they deliver the speed I purchased.

Like a lot of Florida, most of the people in my county live close to the water. And for the most part the populated areas have access to 25 Mbps. But there are three urban areas in the county that don’t, which are parts of Charlotte Beach, parts of Harbor View, and an area called Burnt Store.

I find the map of interest because when I moved here a little over a year ago I considered buying in Burnt Store. The area has many nice houses on large lots of up to five acres. I never got enough interest in any particular house there to consider buying, but if I had, I would not have bought once I found there was no fast broadband. I don’t think I am unusual in having fast Internet as one of the requirements I want at a new home. One has to think that in today’s world housing prices will become depressed in areas without adequate Internet, particularly if they are close to an area that has it.

The other thing that is obvious on the map of my county is that the rural areas here do not have adequate broadband, much like most rural areas in the country. By eyeball estimate it looks like perhaps 70% of my county, by area, does not have broadband as defined by the FCC. Some of that area is farms, but there are also a lot of large homes and horse ranches in those areas. The map tells me that in a county with 161,000 people, over 10,000 people don’t have broadband. Our percentage of broadband coverage puts us far ahead of most of the rest of the country, although the people without broadband here probably don’t feel too lucky.

I contrast the coasts of Florida with the Midwest. In places like Nebraska it looks like nobody outside of decent-sized towns has broadband. There are numerous entire counties in Nebraska where nobody has access to 25 Mbps broadband. And that is true throughout huge swaths of the Midwest and West.

There are pockets of broadband that stick out on the map. For example, there is a large yellow area in rural Washington State. This is due to numerous Public Utility Districts, which are county-wide municipal electric systems, which have built fiber networks. What is extraordinary about their story is that by Washington law they are not allowed to offer retail services, and instead offer wholesale access to their networks to retail ISPs. It’s a hard business plan to make work, and still a significant amount of fiber has been built in the area.

And even though much of the map is blue, one thing to keep in mind is that the map is overly optimistic and overstates the availability of 25 Mbps broadband. That’s because the database supporting this map comes from the National Broadband Map, and the data in that map is pretty unreliable. The speeds shown in the map are self-reported by the carriers who sell broadband, and they frequently overstate where they have coverage at various speeds.

Let’s use the example of rural DSL since the delivered speed of that technology drops rapidly with distance. If a telco offers 25 Mbps DSL in a small rural town, by the time that DSL travels even a mile out of town it is going to be at speeds significantly lower than 25 Mbps. And by 2–3 miles out of town it will crawl at a few Mbps at best or not even work at all. I have helped people map DSL coverage areas by knocking on doors and the actual coverage of DSL speeds around towns looks very different than what is shown on this map.
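The rough relationship between distance and delivered DSL speed is easy to illustrate. The sketch below simply echoes the qualitative numbers in the paragraph above as ballpark figures; real results vary with wire gauge, line quality, and equipment, so treat them as illustrative only.

```python
# Ballpark delivered DSL speed by distance from the hub, echoing the
# qualitative claims above. Illustrative figures only.
speed_by_distance_miles = [
    (0.0, 25),    # in town, near the DSL hub: roughly the advertised speed
    (1.0, 12),    # a mile out: already well below the advertised rate
    (2.0, 5),     # 2-3 miles out: a few Mbps at best
    (3.0, 2),
    (4.0, 0.3),   # may barely work, or not work at all
]

for miles, mbps in speed_by_distance_miles:
    meets = "yes" if mbps >= 25 else "no"
    print(f"{miles:.0f} miles from hub: ~{mbps} Mbps (meets the 25 Mbps definition: {meets})")
```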

Many of the telcos claim the advertised speed of their DSL for the whole area where it reaches. They probably can deliver the advertised speeds at the center of the network near to the DSL hub (even though sometimes this also seems to be an exaggeration). But the data supplied to the National Broadband Map might show the same full-speed DSL miles away from the hub, when in fact the people at the end of the DSL service area might be getting DSL speeds that are barely above dial-up.

So if this map was accurate, it would show a greater number of people who don’t have 25 Mbps broadband available. These people live within a few miles of a town, but that means they are usually outside the cable TV network area and a few miles or more away from a DSL hub. There must be many millions of people that can’t get this speed, in contradiction to the map.

But the map has some things right, like when it shows numerous counties in the country where not even one household can get 25 Mbps. That is something I can readily believe.

Wearables Were Doomed to Fail

One thing I’ve noticed about tech is that the industry shows amazing exuberance for any new product. Just in the last few years we have seen huge sales forecasts for various home automation devices, for 3D printers, and for wearables. But none of these products has done nearly as well in the market as industry analysts predicted.

We are certainly seeing that with wearables. Analysts had predicted that 90 million wearables would ship in 2014 and the actual count was closer to 52 million. And shipments should drop sharply again this year. It seems like every company that builds technology jumped into the fitness tracker market. It felt like a new brand hit the market every few weeks.

The exuberance for fitness trackers has me scratching my head. My wife runs long distances, and she did not see any reason to buy a fitness wearable. She uses an older generation of technology, basically a pedometer with GPS, and that works well enough for what she needs. The device tells her how far she runs and can track her routes, and she feels no need for a machine that tracks her 24 hours per day. If somebody who is a serious runner doesn’t see the sense in the device, then there are probably not that many people who really need what a fitness tracker can do.

The market for the devices wasn’t helped when several studies last year showed that fitness trackers aren’t even very good at some of their basic functions like measuring calories burned. I never expected them to be because there are a lot more variables in that equation than just the steps that somebody runs.

The real question with fitness trackers or most other wearables is the value proposition they offer to people. How many people are willing to shell out the money and then use a device to tell them how many steps they have taken or how many calories they have burned? I think the people who made fitness trackers had an unrealistic hope that the devices were going to somehow change behavior and get millions of people off the couch and out running. But devices don’t change people. To be successful a device needs to be relevant and fit into our existing life. The device must have a compelling reason for us to use it.

Wearables in their current form don’t fit into very many people’s lives. They don’t, of themselves, make you fitter, thinner, or happier. They do supply you with basic data, but it now seems that we can get that same information for free from our smartphones. For a device to be successful it has to give people immediate satisfaction.

I think this is why the only two really successful devices recently have been the iPod and the smartphone. The iPod had a big market burst because everybody wanted to carry around all of the music they love. And the smartphone hit the market at exactly the right time. We already had a generation of people who were online and the smartphone added mobility to something we were already doing. But the smartphone didn’t stop there and it stays relevant by adding new amazing functions for those willing to delve into apps.

We now see fitness trackers morphing slightly into smartwatches. Apple is trying hard to paint the smartwatch as something brand new, but they look no more compelling to me. What can a smartwatch do that will significantly enhance my life if I am already carrying my smartphone? If the smartwatch makers can answer that question then they have a chance for success. If not, it will be another fad and a tech device bubble that will soon burst.

I hope I don’t sound too pessimistic, because I think the time will come when everybody will have a wearable. But we are a generation or more away from that time. We need a few more cycles of Moore’s law to make chips smaller, faster, and more power-efficient. Almost every futurist paints a picture where the infosphere surrounds us wherever we go. And that probably means having some sort of computer that is always with us, and that means some sort of wearable. These future devices will take over all of the functions of the smartphone, and as such they will be wildly successful.

But a number of breakthroughs are needed to get to those future devices. There needs to be a really accurate natural voice interface so that we can talk with our device easily. There needs to be some way for the future device to show us images, perhaps by casting them straight onto the lenses of our eyes. When these devices become a true personal assistant we will find them compelling and mandatory. But until then, every device that does things a smartphone can also do is doomed to market failure.