White Space Spectrum for Rural Broadband – Part II

Word travels fast in this industry, and in the last few days I’ve already heard from a few local initiatives that have been working to get rural broadband. They’re telling me that the naysayers in their communities are now pushing them to stop working on a broadband solution since Microsoft is going to bring broadband to rural America using white space spectrum. Microsoft is not going to be doing that, but some of the headlines could make you think they are.

Yesterday I talked about some of the issues that must be overcome in order to make white space spectrum viable. It is certainly no slam dunk that the spectrum is going to be viable for unlicensed use under the FCC’s spectrum plan. And as we’ve seen in the past, it doesn’t take much uncertainty for a spectrum launch to fall flat on its face, something I’ve seen a few times in recent decades.

With that in mind, let me discuss what Microsoft actually said in both their blog and whitepaper:

  • Microsoft will partner with telecom companies to bring broadband by 2022 to 2 million of the 23.4 million rural people that don’t have broadband today. I have to assume that these ‘partners’ are picking up a significant portion of the cost.
  • Microsoft hopes their effort will act as a catalyst for this to happen in the rest of the country. Microsoft is not planning to fund or build to the remaining rural locations themselves. They say that it’s going to take some combination of public grants and private money to make the numbers work. I just published a blog last Friday talking about the uncertainty of having a federal broadband grant program, and such funding may or may not ever materialize. I also have to wonder where commercial partners will be found who are willing to invest the $8 billion to $12 billion that Microsoft estimates this will cost (a quick back-of-envelope calculation follows this list).
  • Microsoft only thinks this is viable if the FCC follows their recommendation to allocate three channels of unlicensed white space spectrum in every rural market. The FCC has been favoring creating just one unlicensed channel per market, and the cellular companies that just bought this spectrum are screaming loudly to keep it at one channel. The skeptic in me says that Microsoft’s white paper and announcement are a clever way to put pressure on the FCC to free up more spectrum. I wonder if Microsoft will do anything if the FCC sticks with one channel per market.
  • Microsoft admits that for this idea to work, manufacturers must mass-produce the needed components. This is the classic chicken-and-egg dilemma that has killed other deployments of new spectrum. Manufacturers won’t commit to mass-producing the needed gear until they know there is a market, and carriers are going to be leery about using the technology until there are standardized mass-market products available. This alone could kill the idea, just as the FCC’s plans for the LMDS and MMDS spectrum died in the late 1990s.
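
Here is the back-of-envelope calculation referenced above. It assumes, and this is my assumption rather than Microsoft’s, that the $8 billion to $12 billion estimate covers the roughly 21.4 million rural people left after the initial 2 million.

```python
# Back-of-envelope math using only the figures cited above. The assumption
# that the $8B - $12B estimate covers the remaining population is mine.
rural_without_broadband = 23_400_000   # rural people without broadband today
microsoft_initial_target = 2_000_000   # people Microsoft plans to reach by 2022
remaining = rural_without_broadband - microsoft_initial_target

for total_cost in (8e9, 12e9):
    per_person = total_cost / remaining
    print(f"${total_cost / 1e9:.0f}B total -> about ${per_person:,.0f} per person")
```

That works out to a few hundred dollars per person, which is presumably why the white space numbers look attractive on paper compared to rural fiber.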

I think it’s also important to discuss a few important points that this whitepaper doesn’t talk about:

  • Microsoft never mentions the broadband data speeds that can be delivered with this technology. The whitepaper does talk about being able to deliver broadband to about 10 miles from a given tower. One channel of white space spectrum can deliver about 30 Mbps up to 19 miles in a point-to-point radio shot. From what I know of the existing trials these radios can deliver speeds of around 40 Mbps at six miles in a point-to-multipoint network, with less speed as the distance increases. Microsoft wants multiple channels in a market because bonding multiple channels could greatly increase speeds, to perhaps 100 Mbps. Even with one channel this is great broadband for a rural home that’s never had broadband. But the laws of physics mean these radios will never get much faster, and those will still be the speeds offered a decade or two from now, when they will feel like slow DSL does today. Too many broadband technology plans fail to recognize that our demand for broadband has been doubling every three years since 1980 (see the quick projection after this list). Speeds that are pretty good today can become inadequate in a surprisingly short period of time.
  • Microsoft wants to be the company that operates the wireless databases behind this and other spectrum. That gives them a profit motive for spurring the use of these spectrum bands. There is nothing wrong with wanting to make money, but this is not a 100% altruistic offer on their part.
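
Here is the quick projection referenced in the first bullet above. The 40 Mbps figure comes from the trial results mentioned there; the 25 Mbps starting point for today’s household demand is my own illustrative assumption, not a number from the whitepaper.

```python
# Projection of the demand-doubling argument above. The 25 Mbps starting
# point for today's household demand is an illustrative assumption.
white_space_speed = 40      # Mbps, roughly what the trials deliver at six miles
household_demand = 25       # Mbps assumed for a typical household today

for year in range(0, 22, 3):             # demand doubles every three years
    demand = household_demand * 2 ** (year / 3)
    verdict = "keeps up" if white_space_speed >= demand else "falls short"
    print(f"Year {year:2d}: demand ~{demand:6,.0f} Mbps -> a fixed 40 Mbps link {verdict}")
```

After the very first doubling the fixed link is already behind the curve, which is exactly the point about technologies that cannot get faster.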

It’s hard to know what to conclude about this. Certainly Microsoft is not bringing broadband to all of rural America, but it sounds like they are willing to help push this forward. Still, we can’t ignore the huge hurdles that must be overcome to realize the vision painted by Microsoft in the white paper:

  • First, the technology has to work, and the interference issues I discussed in yesterday’s blog need to be solved before anybody will trust using this spectrum on an unlicensed basis. Nobody will use this spectrum if unlicensed users constantly get bumped off by licensed ones. The trials done for this spectrum to date were not done in a busy spectrum environment.
  • Second, somebody has to be willing to fund the $8B to $12B Microsoft estimates this will cost. There may or may not be any federal grants ever available for this technology, and there may never be commercial investors willing to spend that much on a new technology in rural America. The fact that Microsoft thinks this needs grant funding tells me that a business plan based upon this technology might not stand on its own.
  • Third, the chicken-and-egg hurdle of getting gear mass-produced for this spectrum must be overcome.
  • Finally, the FCC needs to adopt Microsoft’s view that there should be three unlicensed channels available everywhere, something the license holders are strongly resisting. And from what I see of the current FCC, there is a good chance they are going to side with the big cellular companies.

White Space Spectrum for Rural Broadband – Part I

Microsoft has announced that they want to use white space spectrum to bring broadband to rural America. In today’s and tomorrow’s blogs I’m going to discuss the latest thinking on white space spectrum. Today I’ll discuss the hurdles that must be overcome to use the spectrum, and tomorrow I’ll discuss in more detail what I think Microsoft is really proposing.

The spectrum being called white space has historically been used to transmit television over the air. In the recent incentive auction the FCC got a lot of TV stations to migrate their signals elsewhere to free up this spectrum for broadband use. And in very rural America much of this spectrum has sat unused for decades.

Before Microsoft or anybody can use this spectrum on a widespread basis the FCC needs to determine how much of the spectrum will be available for unlicensed use. The FCC has said for several years that they want to allocate at least one channel of the spectrum for unlicensed usage in every market. But Microsoft and others have been pushing the FCC to allocate at least three channels per market and argue that the white space spectrum, if used correctly, could become as valuable as WiFi. It’s certainly possible that the Microsoft announcement was aimed at putting pressure on the FCC to provide more than one channel of spectrum per market.

The biggest issue that the FCC is wrestling with is interference. One of the best characteristics of white space spectrum is that it can travel great distances. The spectrum passes easily through things that kill higher frequencies. I remember as a kid being able to watch UHF TV stations in our basement that were broadcast from 90 miles away from a tall tower in Baltimore. It is the ability to travel significant distances that makes the spectrum promising for rural broadband. Yet these great distances also exacerbate the interference issues.

Today the spectrum has numerous users. There are still some TV stations that did not abandon the spectrum. There are two bands used for wireless microphones. There was a huge swath of this spectrum just sold to various carriers in the incentive auction that will probably be used to provide cellular data. And the FCC wants to create the unlicensed bands. To confound things, the mix between the various users varies widely by market.

Perhaps the best way to understand white space interference issues is to compare them to WiFi. One of the best characteristics (and many would also say one of the worst characteristics) of WiFi is that it allows multiple users to share the bandwidth at the same time. These multiple uses cause interference, and so no user gets full use of the spectrum, but this sharing philosophy is what made WiFi so popular: outside of the most crowded environments, anybody can create an application using WiFi and know that in most cases the bandwidth will be adequate.

But licensed spectrum doesn’t work that way, and the FCC is obligated to protect all spectrum license holders. The FCC has proposed to solve the interference issues by requiring that radios be equipped so that unlicensed users first dynamically check to make sure there are no licensed uses of the spectrum in the area. If they sense a licensed use they cannot broadcast, and if they sense one once they are broadcasting, they must abandon the channel.

This would all be done by using a database that identifies the licensed users in any given area, along with radios that can search for licensed usage before making a connection. This sort of frequency scheme has never been tried before. Rather than sharing spectrum like WiFi, the unlicensed user will only be allowed to use the spectrum when there is no interference. As you can imagine, the licensed cellular companies, which just spent billions for this spectrum, are worried about interference. But there are also concerns from the churches, city halls, and musicians who use wireless microphones.
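
To make the database-driven scheme concrete, here is a minimal sketch of the check-before-transmit logic as I understand the proposal. The query function and channel numbers are hypothetical stand-ins; a real radio would query an FCC-certified geolocation database and would also have to re-check and vacate a channel if a licensed user appears mid-transmission.

```python
# Minimal sketch of the check-before-transmit idea described above. The
# database query is a hypothetical stand-in, not the real FCC interface.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChannelStatus:
    channel: int
    licensed_user_present: bool   # TV station, wireless microphone, cellular, etc.

def query_database(latitude: float, longitude: float) -> List[ChannelStatus]:
    """Stand-in for a lookup against a white space geolocation database."""
    # A real radio would make a network call here before every transmission.
    return [
        ChannelStatus(21, True),    # licensed TV station nearby
        ChannelStatus(26, False),
        ChannelStatus(31, False),
    ]

def pick_channel(latitude: float, longitude: float) -> Optional[int]:
    """Return a channel with no licensed user at this location, or None."""
    for status in query_database(latitude, longitude):
        if not status.licensed_user_present:
            return status.channel
    return None   # every channel is occupied: an unlicensed radio must stay silent

channel = pick_channel(38.97, -76.50)
print(f"transmit on channel {channel}" if channel is not None else "no clear channel")
```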

It seems unlikely to me that unlicensed white space spectrum is going to be very attractive in an urban area with a lot of usage on the spectrum. If it’s hard to make or maintain an unlicensed connection then nobody is going to try to use the spectrum in a crowded-spectrum environment.

The question that has yet to be answered is whether this kind of frequency plan will work in rural environments. There have been a few trials of this spectrum over the past five years, but those tests mostly proved the viability of the spectrum for providing broadband and did not test the databases or the interference issues in a busy spectrum environment. We’ll have to see what happens in rural America once the cellular companies start using the spectrum they just purchased. Because of the great distances over which the spectrum is viable, I can imagine a scenario where the use of licensed white space in a county seat might make it hard to use the spectrum in the adjoining rural areas.

And like any new spectrum, there is a chicken-and-egg situation with the wireless equipment manufacturers. They are not likely to commit to making huge amounts of equipment, which would make this affordable, until they know that this is really going to work in rural areas. And we might not know if this is going to work in rural areas until there have been mass deployments. This same dilemma largely sank the LMDS and MMDS spectrum fifteen years ago.

The white space spectrum has huge potential. One channel can deliver 30 Mbps to the horizon on a point-to-point basis. But there is no guarantee that the unlicensed use of the spectrum is going to work well under the frequency plan the FCC is proposing.

New Video Format

Six major tech companies have joined together to create a new video format. Google, Amazon, Cisco, Microsoft, Netflix, and Mozilla have combined to create a new group called the Alliance for Open Media.

The goal of this group is to create a video format that is optimized for the web. Current video formats were created before there was widespread video viewing through web browsers on a host of different devices.

The Alliance has listed several goals for the new format:

Open Source. Current video codecs are proprietary, making it impossible to tweak them for a given application.

Optimized for the Web. One of the realities of the web is that there is no guarantee that all of the bits of a given transmission will arrive at the same time. This is the cause of many of the glitches we see when trying to watch live video on the web. A web-optimized video codec would be allowed to plow forward with less than complete data. In most cases a small number of missing bits won’t be noticeable to the eye, unlike the fits and starts that often come today when video playback is delayed waiting for packets.

Scalable to Any Device and Any Bandwidth. One of the problems with existing codecs is that they are not flexible. For example, consider a time when you wanted to watch something in HD but didn’t have enough bandwidth. The only option today is to fall all the way back to an SD transmission at far lower quality. But in between those two standards is a wide range of possible options where a smart codec could analyze the available bandwidth and then maximize the transmission by choosing among the many variables within the codec. This means you could get ‘almost HD’ rather than defaulting to something of much poorer quality (a rough sketch of this idea follows these goals).

Optimized for Computational Footprint and Hardware. Device manufacturers would be able to tune the codec specifically for their hardware. Not all smartphones, tablets, or other devices are the same, and manufacturers could choose settings that maximize video display on each of their devices.

Capable of Consistent, High-quality, Real-time Video. Real-time video is a far greater challenge than streaming video. Video content is not uniform in quality and characteristics, so there can be a major difference in quality between two different video streams watched on the same device. A flexible video codec could standardize quality, much the way a sound system can level out volume differences between audio streams.

Flexible for Both Commercial and Non-commercial Content. A significant percentage of the videos watched today are user-generated and not from commercial sources. It’s just as important to maximize the quality of Vine videos as it is for commercial shows from Netflix.
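
Here is the rough sketch referenced in the scalability goal above. It shows the kind of rendition-selection logic a smarter codec or player could apply; the rendition ladder and the bitrates are made-up numbers for illustration, not anything from the Alliance.

```python
# Sketch of the 'scalable to any bandwidth' goal above: rather than dropping
# straight from HD to SD, pick the best rendition the measured bandwidth can
# sustain. The ladder and bitrates are made-up numbers for illustration.
RENDITIONS = [
    ("1080p HD", 8.0),            # name, required Mbps
    ("720p 'almost HD'", 5.0),
    ("576p", 3.0),
    ("480p SD", 1.5),
]

def choose_rendition(measured_mbps: float, headroom: float = 0.8) -> str:
    """Pick the highest-quality rendition that fits the measured bandwidth."""
    usable = measured_mbps * headroom          # keep margin for jitter
    for name, required_mbps in RENDITIONS:
        if required_mbps <= usable:
            return name
    return RENDITIONS[-1][0]                   # worst case: lowest rung

for bandwidth in (10.0, 6.5, 4.0, 1.2):
    print(f"{bandwidth:4.1f} Mbps available -> {choose_rendition(bandwidth)}")
```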

There is no guarantee that this group can achieve all of these goals immediately, because that’s a pretty tall task. But the combined power of these firms certainly is promising, and the potential for a new video codec that meets all of these goals is enormous. It would improve the quality of web video on all devices. For me personally quality matters, which is why I tend to watch videos from sources like Netflix and Amazon Prime; by definition streamed video can be of much higher and more consistent quality than real-time video. But I’ve noticed that my daughter has a far lower standard of quality than I do and watches videos from a wide variety of sources. Improving web video, regardless of the source, will be a major breakthrough and will make watching video on the web enjoyable to a far larger percentage of users.

Universal Internet Access

While many of us are spending a lot of time trying to find a broadband solution for the unserved and underserved homes in the US, companies like Facebook, Google, and Microsoft are looking at ways of bringing some sort of broadband to everybody in the world.

Mark Zuckerberg of Facebook spoke to the United Nations this past week and talked about the need to bring Internet access to the five billion people on the planet that do not have it. He says that bringing Internet access to people is the most immediate way to help lift people out of abject poverty.

And one has to think he is right. Even very basic Internet access, which is what he and those other companies are trying to supply, will bring those billions into contact with the rest of the world. It’s hard to imagine how much untapped human talent resides in those many billions and access to the Internet can let the brightest of them contribute to the betterment of their communities and of mankind.

But on a more basic level, Internet access brings basic needs to poor communities. It opens up ecommerce and ebanking and other fundamental ways for people to become engaged in ways of making a living beyond a scratch existence. It opens up communities to educational opportunities, often for the first time. There are numerous stories already of rural communities around the world that have been transformed by access to the Internet.

One has to remember that the kind of access Zuckerberg is talking about is not the same as what we have in the developed countries. Here we are racing towards gigabit networks on fiber, while in these new places the connections are likely to be slow connections almost entirely via cheap smartphones. But you have to start somewhere.

Of course, there is also a bit of entrepreneurial competition going on here since each of these large corporations wants to be the face of the Internet for all of these new billions of potential customers. And so we see each of them taking different tactics and using different technologies to bring broadband to remote places.

Ultimately, the early broadband solutions brought to these new places will have to be replaced with some real infrastructure. As any population accepts Internet access they will quickly exhaust any limited broadband connection from a balloon, airplane, or satellite. And so there will come a clamor over time for the governments around the world to start building backbone fiber networks to get real broadband into the country and the region. I’ve talked to consultants who work with African nations and it is the lack of this basic fiber infrastructure that is one of the biggest limitations on getting adequate broadband to remote parts of the world.

And so hopefully this early work to bring some connectivity to remote places will be followed up with programs to bring more permanent broadband infrastructure to the places that need it. It’s possible that the need for broadband will soon be ranked right after food, water, and shelter as a necessity for a community. I would expect people around the world to want broadband and to push their governments into making it a priority. I don’t even know how well we’ll do at getting fiber to every region of our own country, and so the poorer parts of the world face a monumental task over the coming decades to satisfy the desire for connectivity. But when people want something badly enough they generally find a way to get it, and so I think we are only a few years away from a time when most of the people on the planet will be clamoring for good Internet access.


Should the FCC Regulate OTT Video?

A funny thing happened on the way to making it easier for OTT video providers to get content. Some of the biggest potential providers of online content, like Amazon, Apple, and Microsoft, have told the FCC that they don’t think online video companies ought to be regulated as cable companies.

Of course, these few large companies don’t represent everybody who is interested in providing online video, and so they are just one more faction for the FCC to deal with on this issue. For example, FilmOn X recently got a court order allowing it to buy video as a regulated video provider, and in the past Aereo had asked for the same thing.

A lot of the issue boils down to companies that want to put local networks online or else deliver them in some non-traditional way, as was being done by FilmOn X and Aereo. These kinds of providers are seeking the ability to force the local network stations to negotiate local retransmission agreements with them. Under current law the stations are not required to do so and are, in fact, refusing to do so.

The FCC is in a tough spot here because they don’t have a full quiver of tools at their disposal. The FCC’s hands are very much tied by the various sets of cable laws that have been passed by Congress over the years – the rules that define who is and is not a cable company, and more importantly, the rules and obligations of being a cable company. It will be interesting to see how much the FCC thinks it can stretch those rules to fit the situation of online programming, which was never anticipated in the rules.

I can certainly understand why the large companies mentioned above don’t want to be cable companies, because there are pages and pages of rules about what that means; the FCC is unlikely to be able to apply just a few of those rules to a company without also imposing the ones these companies don’t want.

For example, the current cable law defines required tiers of service. Cable companies must have at least a basic and an expanded basic tier, and those are very narrowly defined. A basic tier includes all of the ‘must-carry’ local networks and the expanded basic carries all of the things we think of as cable channels.

I think what the FCC has in mind is a set of rules that require programmers to negotiate in good faith with online companies that want to buy their content. Certainly any company that wants to put content online today is completely at the mercy of programmers saying yes or no to giving them the content they want to carry. And there is nothing stopping the programmers from changing their minds if they see an OTT company becoming more successful than they like.

So I would think that even Amazon, Apple, and Microsoft would like the ability to force the programmers to negotiate with them, but they obviously don’t want the other FCC rules that they think will come along with that ability. Of course, these are very large companies with deep pockets, and one has to imagine they get a fairly decent hearing when they talk to programmers. The FCC’s real concern is not these giant companies but the companies smaller than them that have no ability to force the programmers even to talk to them. I think the FCC believes that if online content is to be successful there ought to be widespread competition and innovation online, not just content provided by a few giant tech companies along with other huge companies like Verizon.

Today the programmers have most of the power in the industry. They are making a huge amount of money from the mega-subscription model in which all of their content is forced upon US cable companies. And they have little incentive to become more reasonable, because most of them are seeing gigantic growth in selling content overseas and so have no real reason to upset the apple cart in the US market.

If online content is to become a vibrant alternative and not just be expensive packages foisted on the public by a small group of huge corporations, then something has to change. I just don’t know how much the FCC can do realistically considering how they are hamstrung by the current cable laws.

Is the Universal Translator Right Around the Corner?

We all love a race. There is something about seeing somebody strive to win that gets our blood stirring. But there is one big race going on now that you’ve likely never heard of: the race to develop deep learning.

Deep learning is a specialized field of Artificial Intelligence research that looks to teach computers to learn by structuring them to mimic the neurons in the neocortex, that portion of our brain that does all of the thinking. The field has been around for decades, with limited success, and has needed faster computers to make any real headway.

The race is between a few firms that are working to be the best in the field. Microsoft and Google have gone back and forth with public announcements of breakthroughs, while other companies like Facebook and China’s Baidu are keeping their results quieter. It’s definitely a race, because breakthroughs are always compared to the other competitors.

The current public race deals with pattern recognition. The various teams are trying to get a computer to identify the objects in a defined data set of millions of pictures. In September Google announced that it had the best results on this test, and just this month Microsoft said its computers not only beat Google’s but did better on the test than people do.

All of the companies involved readily admit that their results are still far below what a human can do naturally in the real world, but they have made huge strides. One of the best-known demonstrations was done last summer by Google, which had its computer look at over 10 million YouTube videos and asked it to identify cats. The computer did twice as well as any previous attempt, which was particularly impressive since the Google team had not defined what a cat was for the computer ahead of time.

There are deep learning techniques in IBM’s Watson, the computer that beat the best champions at Jeopardy. Watson is currently being groomed to help doctors make diagnoses, particularly in the third world where there is a huge shortage of doctors. IBM has also started selling time on the machine to anybody, and there is no telling all of the ways it is now being used.

Probably the most interesting current research is in teaching computers to learn on their own. This is done today by stacking multiple layers of ‘neurons’. The first layer learns a basic concept, like recognizing somebody speaking the letter S. The outputs of several first-layer neurons are fed to a second layer, which can then recognize more complex patterns, and the process is repeated until the computer is able to recognize complex sounds.
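
To make the layering idea concrete, here is a minimal sketch of a stacked network in which each layer works from the outputs of the layer below. The layer sizes and random weights are arbitrary; a real speech or image system would train these weights on data rather than leaving them random.

```python
# Minimal sketch of the layering described above: each layer of 'neurons'
# combines the outputs of the layer below into a more complex representation.
# Sizes and weights are arbitrary; a real model would learn them from data.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs: np.ndarray, n_neurons: int) -> np.ndarray:
    """One layer: a weighted sum of the inputs passed through a nonlinearity."""
    weights = rng.normal(size=(inputs.shape[0], n_neurons))
    return np.tanh(inputs @ weights)

raw_audio = rng.normal(size=100)   # pretend this is a short snippet of raw audio

first = layer(raw_audio, 50)       # might pick out simple features, like an 'S' sound
second = layer(first, 20)          # combines simple features into larger patterns
third = layer(second, 5)           # combines those into still more complex sounds

print(first.shape, second.shape, third.shape)   # (50,) (20,) (5,)
```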

The computers being used for this research are already impressive. The Google computer that did well learning to recognize cats had a billion connections and was 70% better at recognizing objects than any prior computer. For now, the breakthroughs in the field are being accomplished by applying brute computing force; the cat-test computer used over 16,000 computer processors, something that only a company like Google or Microsoft has available.

Computer scientists generally agree that we are probably still a few decades away from a time when computers can actually learn and think on their own. We need a few more turns of Moore’s Law for the speed of computers to increase and the size of processors to decrease. But that does not mean there aren’t plenty of real-life applications that can benefit from the current generation of deep learning computers.

The research is already producing real-world benefits. For instance, Google has used it to improve the speech recognition in Android smartphones. But what is even more exciting is where this research is headed. Sergey Brin says that his ultimate goal is to build a benign version of HAL from 2001: A Space Odyssey. It’s likely to take multiple approaches in addition to deep learning to get to such a computer.

But long before a HAL-like computer we could have some very useful real-world applications of deep learning. For instance, computers could monitor complex machines like electric generators and predict problems before they occur. They could monitor traffic patterns and change traffic lights in real time to eliminate traffic jams. They could enable self-driving cars. And they could produce a universal translator that lets people who speak different languages converse in real time. In fact, in October 2014, Microsoft researcher Rick Rashid gave a lecture in China. The deep learning computer transcribed his spoken lecture into written text with a 7% error rate. It then translated it into Chinese and spoke to the crowd while simulating his voice. It seems that with deep learning we are not far away from having the universal translator promised to us by science fiction.

Who Will Own the Internet of Things?

Yesterday’s blog talked about how the current Internet is falling under the control of a handful of large corporations: Apple, Amazon, Facebook, Google, and Microsoft. This leads me to ask if the upcoming Internet of Things is also going to be owned by a handful of companies.

This is not an idle question, because it has become clear lately that you don’t necessarily own a connected device even though you might pay for it. As an example, there was recently an article in the New York Times reporting that a car company was able to disable cars whose owners were late in making payments. The idea of Ford or General Motors still having access to the brains of your vehicle even after you buy it is unsettling. It’s even more unsettling to think that access is in the hands of somebody at your local car dealer. Imagine them turning off your car when you are far away from home or when you have a car full of kids. But even worse to me is that if somebody can turn off your car, then somebody else can hack it.

The car companies are able to do this because they maintain access to the root directory of your car’s computer system. Whether you financed the car with them or paid cash, they still maintain a backdoor that lets them get into your car’s computer remotely. They might use this backdoor to disable the vehicle, as in this example, or to download software upgrades. But as long as they have that ability, they still have some degree of control over your car and over you, and you have to ask whether you truly own your own car. As an aside, most people don’t realize that almost all cars today also contain a black box, much like the recorder in airplanes, that records a lot of data about your car and your specific driving habits. It records how fast you drive and whether you are wearing your seatbelt, and this data is available to the car companies.

Perhaps the car is an extreme example, because a car is probably the most complicated device you own. But it’s likely that every IoT device is going to have the same backdoor access to its root directory, meaning the company that made the device is going to have a way to get in. Every smartphone, appliance, thermostat, door lock, burglar alarm, and security camera can be controlled to some degree by somebody else. It makes you seriously ask whether you entirely own any smart device.

Over time it is likely that the IoT industry will consolidate and that a handful of companies will control the vast majority of IoT devices, just like the big five companies control a lot of the Internet. It might even be the same companies; certainly Apple, Google, and Microsoft are all making a big play for the IoT.

I’ve written before about the lack of security in most IoT devices. My prediction is that it’s going to take a few spectacular failures and security breaches of IoT devices before the companies that make them pay real attention to security. But even if they tighten up every security hole, if Google or Apple maintains backdoor access to your devices, then those devices are not truly secure.

I think that eventually there will be a market for devices that a buyer can control and that don’t include backdoor access. It certainly would be possible to set up an IoT network that doesn’t communicate outside the home, where devices all report to a master controller within the home. But it’s going to take people asking for such devices to create the market for them.
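
As a thought experiment, here is a minimal sketch of what that home-only arrangement could look like: devices on the LAN report to a local controller and nothing is forwarded outside the house. The port number and message format are hypothetical choices for illustration; a real setup would also block the port at the router so nothing reaches it from outside.

```python
# Minimal sketch of a home-only IoT hub: devices on the LAN send status
# reports to this local controller, and nothing is forwarded to any cloud.
# The port and JSON message format are hypothetical choices.
import json
import socketserver

LISTEN_ADDRESS = ("0.0.0.0", 8883)   # in practice, bind to the LAN interface only

class DeviceReportHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each device sends one JSON line, e.g. {"device": "thermostat", "temp_f": 68}
        line = self.rfile.readline().decode("utf-8").strip()
        try:
            report = json.loads(line)
        except json.JSONDecodeError:
            self.wfile.write(b'{"status": "rejected"}\n')
            return
        # State stays on this machine; no outside calls are made.
        print(f"report from {self.client_address[0]}: {report}")
        self.wfile.write(b'{"status": "ok"}\n')

if __name__ == "__main__":
    with socketserver.TCPServer(LISTEN_ADDRESS, DeviceReportHandler) as server:
        server.serve_forever()
```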

If people are happy to have Apple or Google spy on them in their homes then those companies will be glad to do it. One of the first things that crossed my mind when Google bought Nest was that Google was going to be able to start tracking a lot of behavior about people inside their homes. They will know when you wake and sleep and how you move around the home. That may not sound important to you, but every smart device you add to your house will report something else about you. With the way that the big companies mine big data, the more they know about you the better they can profile you and the easier it is for them to sell to you. I don’t really want Google to know my sleep habits and when I go to the bathroom. To be truthful, it sounds creepy.

Do the Cloud Guys Get It?


I just read an article this week that cites five reasons why cloud computing isn’t taking off as fast as the companies selling the solution were hoping for. The reasons unfortunately make me feel like the cloud industry folks are out of touch with the real world. This is not an uncommon phenomenon in that high-tech industries are run by innovators. Innovators often don’t understand why the rest of the world doesn’t see things with the same clarity as they do.

Following are the five reasons cited in the article about why cloud computing is not selling as fast as hoped, with my observations after each point.

The Organization. Organizations are often structured in a way that does not make a shift to the cloud easy. For instance, IT shops are often organized into separate groups for compute, network, and storage.

Changes that affect people are never easy for companies. Going to the cloud is supposed to save a lot of labor costs for larger companies, but that is not necessarily the case for smaller companies.  But even larger companies are going to take a while to make sure they are not walking off a cliff. Every old-timer like me remembers a few examples of where major technology conversions went poorly, and nobody wants to be the one blamed if a big conversion goes wrong.

Security. Companies are afraid that the cloud is not going to be as safe as keeping all of their data in-house.

Everything I have read says that, if done right, the cloud can be very secure. However, the fear is that not every conversion is going to be done right. You can place your bets with me now, but sometime in the next year or two there is going to be a major ugly headline about a company that converted to the cloud poorly, which led to a major breach of customer records. The problem is that everybody is human and not every cloud company is going to do every conversion perfectly.

Legacy Applications. Cloud companies want you to get rid of legacy systems and upgrade to applications made for the cloud.

This is where cloud companies just don’t get it. First, almost every company uses a few legacy systems that are not upgradable and for which there is no cloud equivalent. Every industry has some quirky homegrown programs and applications that are important for their core business. When you tell a company to kill every legacy application most of them are going to rightfully be scared this is going to create more problems than it solves.

Second, nobody wants to be automatically upgraded with the latest and greatest software. It’s a company nightmare to come in on a Monday and find out that the cloud provider has upgraded everybody to some new Microsoft version of Office that is full of bugs and that everybody hates and that brings productivity to a halt. Companies keep legacy systems because they work. I recently wrote about the huge number of computers still running on Windows XP. That is how the real world works.

Legacy Processes. In addition to legacy software, companies have many legacy processes that they don’t want to change.

Honestly this is arrogant. Companies buy software to make what they do easier. To think that you need to change all of your processes to match the software is really amazingly out of touch with what most companies are looking for. Where a cloud salesman sees ‘legacy system’ most companies see something that works well and that they took years to get the way they want it.

Regulatory Compliance. Companies are worried that the cloud is going to violate regulatory requirements. This is especially true for heavily regulated industries such as finance, health care, and power.

This is obviously a case-by-case issue, but if you are in one of the heavily regulated industries then this has to be a significant concern.

I hope this doesn’t make me sound anti-cloud, because I am not. But I completely understand why many companies are going to take their time considering this kind of huge change. No product should be sold without taking its customers’ concerns into consideration. When I see articles like this I feel annoyed, because the gist of the article is, “Why won’t these dumb customers see that what I have is good for them?” That is never a good way to get people to buy what you are selling.

Hello Siri . . .


Gartner, a leading research firm, issued a list of the ten top strategic technology trends for 2014. By strategic they mean that these are developments that are getting a lot of attention and development in the industry, not necessarily that these developments will come to full fruition in 2014. One of the items on the list was ‘smart machines’ and under that category they included self-driving cars, smart advisors like IBM’s Watson and advanced global industrial systems, which are automated factories.

But I want to look at another item on their list: contextually aware intelligent personal assistants. This essentially will be Apple’s Siri on steroids. It is expected to be delivered at first mostly via cell phones or other mobile devices. Eventually one would think this will migrate to something like Google Glass, a bracelet, or some other way to have it always with you.

Probably the key word in that phrase is contextual. To be useful, a personal assistant has to learn and understand the way you talk and live in order to become completely personalized to you. Being contextual means the current Siri needs to grow to learn things by observation. To be the life-changing assistant envisioned by Gartner is going to require software that can learn to anticipate what you want. For example, as you are talking to a certain person your assistant ought to be able to pick out of the conversation the bits and pieces you are going to want it to remember. Somebody may mention their favorite restaurant or favorite beer, and you would want your assistant to remember that without being told to do so.

Both Apple’s and Microsoft’s current personal assistants have already taken the first big step in the process in that they can converse to some degree in natural language. Compare what today’s assistants can already do to Google’s search engine, which makes you type in awkward phrases. Any assistant is going to have to become completely fluent in a person’s language.

One can easily envision a personal assistant that helps you learn when you are young and then sticks with you for life. Such an assistant will literally become the most important ‘person’ in somebody’s life. An effective assistant can free a person from many of the mundane tasks of life. You will never get lost, have to make an appointment, remember somebody’s birthday, or do many of the routine things that are part of life today. But it still won’t take out the trash, although it can have your house-bot do that.

In the future you can envision this assistant tied into the Internet of things so it would be the one device you give orders to. It would then translate and talk to all of your other systems. It would talk to your smart house, talk to your self-driving car, talk to the system that is monitoring your health, etc.

The biggest issue with this kind of personal assistant is going to be privacy. A true life-assistant is going to know every good and bad thing about you, including your health problems and every one of your ugly bad habits. It is going to be essential that this kind of system stay completely private and be somehow immune to hacking. Nobody can trust an assistant in their life that others can hack or peer into.

One might think that this is something on the distant horizon, but there are many industry experts who think this is probably the first thing on the smart machine list that will come to pass, and that there will be pretty decent versions of this within the next decade. Siri is already a great first step, although often completely maddening. But as this kind of software improves it is not hard to picture this becoming something that you can’t live without. It will be a big transition for older people, but our children will take to this intuitively.

Old is Not Necessarily Dead


For some reason, last month was a time when I kept running into legacy systems over and over. And by legacy systems I am talking about older technology platforms that everybody assumes are little used or dead and gone.

First, I ran into two separate CPA firms that are still using PCs with Windows XP. They said they are using it because the programs they rely on don’t require a newer version of Windows, and the XP platform is stable and trouble-free. As it turns out, this is a fairly common attitude in the corporate world. It is estimated that one-third of the world’s computers, or about 500 million machines, still run XP, a 13-year-old operating system.

Microsoft is going to officially stop supporting XP in April 2014, and that will drive a lot of corporations to upgrade to something newer. But many smaller firms (like these CPA firms) will choose not to upgrade and will continue to run it without the Microsoft backstop. Their reasoning is that hackers no longer concentrate on the older operating systems, so the platforms will actually get safer over time as fewer and fewer people use them. And let’s face it, upgrading a Windows platform at a company is a lot more of a pain in the butt than doing it at home. I have spent a whole day before making my machine work right after an upgrade, and you can figure that same effort times many machines in an office.

I also ran into XP when we started doing number portability; the NPAC system that everybody uses for number porting also still runs on Windows XP.

But then I ran into something even older. One of my clients has recently started using MS-DOS as the software that controls external access to his server. He has it set up so that somebody gets only three tries to log in before the operating system shuts down. He thinks this is effectively hacker-proof, since most LANs are hacked by programs that try millions of password combinations to get into a system. Many of you reading this are not going to remember the pleasure of turning on your computer and being greeted by a C prompt.
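
My client’s DOS setup is his own; purely to illustrate the three-strikes idea, here is a minimal sketch of the same logic in Python. The stored password hash and the shutdown command are stand-ins, not his configuration.

```python
# Sketch of the three-strikes idea described above: allow three login attempts,
# then halt the machine rather than letting a script keep guessing. The stored
# hash and the shutdown command are illustrative stand-ins.
import getpass
import hashlib
import os
import sys

# Hypothetical stored credential: the SHA-256 hash of the real password.
STORED_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()

def password_matches(candidate: str) -> bool:
    return hashlib.sha256(candidate.encode("utf-8")).hexdigest() == STORED_HASH

for attempt in range(3):
    if password_matches(getpass.getpass("Password: ")):
        print("Access granted.")
        sys.exit(0)
    print("Access denied.")

# Three failures: shut the system down (assumes a Unix-like host).
os.system("shutdown -h now")
```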

There are other legacy applications that are more telephone related. For example, I know a company that offers a very vanilla voice service where every customer gets a basic line and the full feature set. The cheapest way they could figure out to do this was to buy an old legacy TDM switch. They picked it up used for almost nothing, including a big pile of spares. Since they aren’t trying to do anything unusual, it’s easy to provision and it just hums along.

I have a lot of clients who ditched legacy systems over the last decade. But the reason they ditched those switches was not that they didn’t work; it was that the maintenance fees charged by the switch vendors were too high. If you buy those same switches on the gray market you have zero vendor maintenance costs, and operating the switch becomes a very different economic proposition.

As someone who is getting a little gray around my own edges, I take an odd pleasure in knowing that people are finding uses for things that were in use decades ago. I know I am nowhere near obsolete, and it makes me smile to see the value in older but still great technology.