Is Peering the End of Network Neutrality?

Netflix and Comcast have announced a deal whereby Netflix will pay to peer with the Comcast network. Numerous articles popped up yesterday claiming this is the end of network neutrality, but I am not so sure about that. To understand why, let me talk a bit about how peering works today. Peering is when two networks decide to make a direct connection between themselves rather than connecting in the more traditional way through the open Internet.

There are two kinds of connections that are typically made. The first is local peering, where two networks that are geographically close decide to exchange data traffic. This typically benefits both parties. Let's look at an example of why. Assume the two parties are medium-sized carriers, a telephone company and a cable company competing in the same community. There is always a considerable amount of Internet traffic that stays within a community. People browse the websites of stores in their own community. People do on-line banking with local banks. People work at home and need to move data into and out of their employer's local networks.

Normally each of these carriers would deliver traffic between their two networks (say, between a customer on one network and a bank on the other) by sending it to the open Internet. Each company has a connection to the Internet through some wholesale provider, a connection that eventually terminates at one of the major Internet POPs like Chicago or Dallas. So when a customer wants to connect with his bank, the data travels out through the first network to the major POP, where it is handed off to the data stream going back to the second network.

Such a connection makes several hops, a hop being each time the message is handled by a router somewhere in the network to figure out where it is going. The more hops, the slower the connection. Local peering solves this problem because the traffic can be exchanged locally and goes straight from one carrier to the other without first being sent to some distant POP. This is a simplistic description, because peering arrangements are usually more complicated than this; they are more likely to be between the underlying transport carriers that handle the traffic for the telephone company and the cable company. But peering makes the connection more direct than it would be under normal network circumstances.
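To get a feel for why fewer hops and a shorter haul matter, here is a rough sketch. All of the numbers (per-hop delay, distances, hop counts, signal speed) are illustrative assumptions, not measurements:

```python
# Rough sketch: each router hop adds handling delay, and the long haul
# to a distant POP adds propagation delay. All figures are assumptions.

def round_trip_ms(hops, one_way_distance_km, per_hop_ms=1.0, km_per_ms=200.0):
    """Estimate round-trip latency: per-hop handling plus propagation."""
    handling = 2 * hops * per_hop_ms                 # both directions
    propagation = 2 * one_way_distance_km / km_per_ms
    return handling + propagation

# Traffic hauled to a distant POP (assume ~1,500 km away) over many hops:
via_pop = round_trip_ms(hops=12, one_way_distance_km=1500)

# Locally peered traffic: few hops, short distance:
peered = round_trip_ms(hops=3, one_way_distance_km=30)

print(f"via distant POP: {via_pop:.1f} ms, local peering: {peered:.1f} ms")
```

The specific milliseconds are invented; the point is only that latency scales with both hop count and distance, so local peering wins on both terms.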

The other kind of peering is one that saves money. I have many clients who peer with Google, because Google and all of its various subsidiaries account for a significant percentage of the traffic on any Internet connection. My clients have done the math and found that it is cheaper to make a direct connection with Google than to pay their underlying carrier to carry that traffic to Google. Anybody who peers with Google this way must pay out of their own pocket to reach a Google POP, probably including paying for the equipment at the POP needed to make the connection. But this kind of peering often results in significant savings. Most people's connection with Google is very much one-directional; there is usually a lot more traffic coming from Google than going to Google.
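The math my clients do looks roughly like this back-of-the-envelope comparison. Every dollar figure and the traffic share below are hypothetical assumptions for illustration only, not real carrier pricing:

```python
# Back-of-the-envelope peering decision. All dollar figures and the
# Google traffic share are hypothetical assumptions.

def peering_saves_money(total_mbps, google_share, transit_price_per_mbps,
                        transport_to_pop, port_and_equipment):
    """Compare the transit cost of the Google-bound traffic against the
    fixed monthly cost of a direct connection to a Google POP."""
    google_mbps = total_mbps * google_share
    transit_for_google = google_mbps * transit_price_per_mbps
    peering_cost = transport_to_pop + port_and_equipment
    return transit_for_google > peering_cost

# Example: 10 Gbps of total traffic, ~40% of it Google-bound, $2/Mbps
# transit, $4,000/month transport to the POP, $1,500/month equipment.
decision = peering_saves_money(10_000, 0.40, 2.00, 4_000, 1_500)
print("Peer with Google?", decision)
```

With these assumed numbers the Google-bound transit would cost $8,000 a month against $5,500 of fixed peering cost, so peering wins; a smaller network with less Google traffic might come out the other way.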

We don't have the details of the Comcast / Netflix deal, so we can't be certain what the arrangement is. But it's clear that up until now the two sides had not agreed to a direct peering arrangement. One has to assume that the traffic from Netflix is nearly all in one direction: video being downloaded to customers who sit on the Comcast network. Without a direct peering arrangement that traffic must reach Comcast through intermediate carriers and often would be routed in ways that slow it down, as is true of any traffic on the open Internet.

I would assume that there is not one big Comcast network, but rather pockets of Comcast all over the country, and that for Netflix to fully peer with Comcast it will have to make connections with these various pockets, all at Netflix's cost. And if this is normal peering, Netflix would also be expected to pay for the connections into the Comcast network, including owning or somehow paying for the large amount of equipment needed to terminate its traffic.

Again, the two sides aren't talking about the details. But I would expect it to cost Netflix something to get its traffic directly to all parts of the Comcast network; that is how normal peering works. The line of network neutrality will have been crossed if Netflix has to pay a lot more for this connection than what others pay. But since this deal has been under negotiation for a year, one has to assume that both parties had the old network neutrality rules in mind as it was negotiated. I can certainly envision an arrangement that looks more like normal peering than a big violation of the principles of network neutrality. If it were the latter, I would expect Netflix to be putting up a big stink. Network neutrality benefits companies like Netflix tremendously, and if they aren't complaining then there is a good chance that this is peering as normal and not a giant money grab by Comcast.

The Evolution of Cellular

There are several big changes on the horizon that are going to really impact cellular networks. One change is transformational, one solves some local network issues, and the third, probably the least important, will get all of the press.

The transformational change is that technology is being developed that will allow the industry to centralize the brains and computing functions of the network. Today there are nearly 200,000 cell phone towers in the US, and each tower requires a full set of switching electronics. Much of the intelligence of the cellular network sits at these cell sites. That makes the cellular network somewhat unusual, in that most other types of networks have been able to centralize the brains and computing power into hubs rather than leaving everything at the edge.

There are several groups now working on ways to migrate the brains of cell sites back to regional data centers. Some people have called this moving the cellular network into the cloud, but that is not really a great description. Rather, this is a migration of computing and processing power from the edge back to a core, as has happened with all other kinds of networks. The cable industry called this migration 'headend consolidation' when it created huge headends that can serve millions of customers.

This will be a transformational change because today it costs a literal fortune for a cellular carrier to implement a technology upgrade, since it has so many cell sites. And this matters because upgrades are hitting the industry at a fast and furious pace. With a centralized cellular network, a cell company could upgrade the core software and electronics at only a few hubs, since the cell sites will become little more than transmitter sites with little intelligence.
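The economics are easy to sketch. The ~200,000 tower count comes from the discussion above; the per-site and per-hub costs and the number of hubs are invented assumptions purely to show the shape of the comparison:

```python
# Illustrative upgrade economics. Per-site cost, per-hub cost and hub
# count are invented assumptions; only the tower count is from the text.
cell_sites = 200_000
hubs = 100                      # hypothetical number of regional data centers

cost_per_site_upgrade = 25_000  # assumed electronics + labor per cell site
cost_per_hub_upgrade = 500_000  # assumed cost to upgrade one hub

distributed = cell_sites * cost_per_site_upgrade
centralized = hubs * cost_per_hub_upgrade

print(f"upgrade every cell site: ${distributed:,}")
print(f"upgrade a few hubs:      ${centralized:,}")
```

Even if the assumed costs are off by a wide margin, upgrading a hundred hubs instead of two hundred thousand towers changes the cost of a generation of technology by orders of magnitude.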

The second big change is that in 2014 we are seeing the cell industry add tens of thousands of small cell sites. For a few years there have been network extenders called femtocells, but now the industry's vendors have developed mini cell sites that are not a whole lot more than a cell site on a card. These small sites don't have the same power as a full cell site, but they can be placed in areas where there is currently network congestion.

These small cell sites can be deployed in stadiums, downtown districts, convention centers and commuting corridors to provide extra call and data capacity where it is most needed. For example, I have a friend who I talk to regularly during his morning commute and I always lose him when he is crossing the 14th Street Bridge into DC. These mini cell sites ought to be able to fix the holes and dead spaces in the existing cellular network.

Finally, there is the change that will get all of the hype. Rumor has it that one or more of the cellular companies are going to start talking about 5G cellular networks this year. As I have discussed in the past, there are not even any networks today that are close to being able to call themselves 4G. The 4G standard begins with the ability of a cell site to deliver 1-gigabit data speeds, and there aren't any sites today that can deliver 1/20th of that speed.

Sprint and T-Mobile coined the term 4G to promote some incremental enhancements to their HSPA+ and LTE networks, and so 4G was used as a marketing phrase to distinguish their technology from the competition. Then the whole industry followed suit, and we now have 3G and 4G phones that use the same networks and have essentially the same capabilities.

There are dozens of little improvements to cellular technology being developed in vendor labs, and every time there is a new tweak that makes speeds a little faster or somehow enhances the customer's experience, the cell companies have been itching to say that they now have 5G. And it will happen. One of them will pull the trigger as marketing hype and the rest will follow. Ironically, by the time we finally get real 4G technology we will probably be selling phones labeled as 10G.

So while the marketers make a lot of hype out of little changes in the network, the really huge change is the possibility of centralizing the networks into hubs. Once that is done, a company could upgrade a few hubs and introduce a new technology improvement overnight. But that doesn't sound sexy and is hard to market, so it will just quietly get implemented in the background.

The Rich Get Richer . . .

Just a few days ago I wrote about the new digital divide: the fact that larger and more prosperous places have, or are getting, faster broadband while smaller and poorer places are being left behind.

And on the heels of that blog, Google just announced that it has invited 34 new cities to talks about expanding its gigabit network. And of course these are all big and/or prosperous and growing places, including Phoenix, Scottsdale and Tempe in Arizona; Atlanta and surrounding suburbs in Georgia; San Antonio in Texas; Raleigh-Durham, Charlotte and surrounding suburbs in North Carolina; Nashville in Tennessee; San Jose and other growing areas in northern California; Salt Lake City; and Portland.

This is great news for those communities. There is certainly no assurance that any of them will get fiber, and Google will be looking for the places willing to give the biggest handouts. But one would think that a decent number of the cities on that list will be able to give Google what it wants to get a fiber network.

Not on this list, as you would expect, are the smaller towns and counties or the inner cities in the east that were ignored by Verizon FiOS. For the most part the Google list represents communities that are relatively economically healthy. The cities on the list are ones that are growing while much of the rest of the country, like the northeast and smaller towns, is shrinking.

In this same week the FCC said that it is going to look at eliminating the state barriers that stop municipalities from building fiber networks. More than twenty states have either a total ban or severe restrictions on government entities getting into the fiber business.

Let's face facts. If you are not one of those places that are thriving, like the places on Google's new list, then the chances are good that nobody is even thinking about building fiber in your neighborhood. You might live close to an independent telephone company or cooperative that is thinking about it. But most of rural America is not on anybody's radar.

I always tell rural communities to consider two steps. First, look around to make sure there is no company nearby that can be enticed to bring you fiber, because sometimes, with the right incentives, there is somebody. But generally there is nobody willing to make such an investment, so the second part of the advice is: if you want fiber you are going to have to step up and build it yourself.

You may need to gather surrounding communities together to get a pile of households large enough to justify a fiber business plan. But your community needs to take the initiative to get fiber or you are going to be left far behind.

Some of the communities Google is targeting were edging towards the wrong side of the new digital divide. I just read this morning that a large portion of Salt Lake City, as an example, still has 3 Mbps DSL for broadband. But those cities are large enough and thriving enough to have gotten Google's attention, and good for them. But if you are a rural county seat or a farming community you are not going to get on Google's, or anybody else's, list.

The New Digital Divide

There was a time, not very many years ago, when the digital divide meant the difference between pockets of people with dial-up and places that had something faster. But this is no longer a good definition, and I think the digital divide is growing very quickly and is a huge issue again. The new digital divide is between cities and suburbs that have relatively fast broadband on one side, and rural areas and urban pockets that have been left a few generations of technology behind on the other. When I say rural areas below, we can't forget that many parts of inner cities are in the same condition and have become broadband deserts.

Today, most of rural America is several generations of technology behind the cities, and there is no real expectation that this gap will ever close. A large portion of rural America is served by DOCSIS 2.0 cable modems and first-generation DSL. These technologies deliver anything from 1 Mbps up to maybe 5 Mbps to the average home and business in these communities. The incumbent carriers claim these areas are served by broadband, and they are always careful to claim that these communities have advertised speeds at or above the paltry 4 Mbps the FCC uses to define broadband.

But every community in this situation has now fallen on the wrong side of the new digital divide. The large telcos and cable companies are making big investments in the metropolitan areas. There are numerous affluent parts of the country with broadband between 50 Mbps and 100 Mbps download for people willing to pay a premium price, and in those markets even the slower cable modem products are already between 20 and 30 Mbps.

And I am not talking only about places where Verizon has built FiOS. The larger cable companies have upgraded to DOCSIS 3.0 in many large markets and now have fast speeds. AT&T has launched U-verse using bonded-pair DSL in many of these same markets, with speeds of around 40 Mbps. And we are on the verge of AT&T and other copper providers having G.fast, which is going to increase speeds on copper to as much as several hundred Mbps. Even the cellular carriers have stepped up their game in the cities, and the latest version of 3.5G delivers speeds of 40 Mbps to 50 Mbps in short bursts.

But these new technology upgrades are not being brought to rural America and are unlikely to be. The incumbent cable companies and telcos installed the current technology over a decade ago and have not upgraded it since, while there have been several upgrades in the areas with good broadband.

The incumbents are not willing to make the needed upgrade investments in small markets. They aren't going to get the same kind of returns they can get for the same investment in a big suburb. They have largely ignored the small markets for years, and the wires there are in bad shape compared to bigger markets. So I think we are now on the verge of a permanent new digital divide, defined by areas that keep getting new technology upgrades and areas that will be stuck in the past. And the gulf between these two areas is only going to grow.

There are real-life repercussions of this gap. Homes on the wrong side of the digital divide can't use broadband the way people in a city can. But much more importantly, businesses can't get the same bandwidth that their competitors in the city have. In the long run this is going to squelch innovation in rural areas. Areas on the wrong side of the digital divide are going to have a really hard time creating jobs that will let their kids stay in the area. The biggest fear in rural communities is that they will become economically irrelevant: that they won't be able to create or keep jobs, their kids will move away, and over a few decades the communities will die.

Cellular is Not the Rural Broadband Solution

I'm often asked why we can't let cellular 4G bandwidth take care of the bandwidth needs of rural America. Looking at the TV ads from Verizon and AT&T, you would assume that the cellular data network is robust and being built everywhere. But there are a lot of practical reasons why cellular data is not the answer for rural broadband:

Rural areas are not being upgraded. The carriers don't make the same kinds of investments in rural markets that they do in urban markets. To see a similar situation in a related industry, consider how the large cable companies upgrade cable modems in the metropolitan areas years before they upgrade rural areas. Urban cellular technology seems to be upgraded every few years, while rural cell sites might get upgraded once a decade.

Rural networks are not built where people live. Even where the cellular networks have been upgraded, rural cellular towers have historically been built to take care of car traffic, referred to in the industry as roaming traffic. Think about where you see cellular towers: they are either on top of tall hills or along a highway, not close to many homes and businesses. This matters because, as with all wireless traffic, data speeds drop drastically with distance from the tower. Where a 3G customer in a city might get 30 Mbps download because they are likely less than a mile from a transmitter, a customer 4 miles from a tower might only get 5 Mbps. And in a rural area, 4 miles is not very far.
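The falloff can be sketched with a toy power-law model. The decay exponent and near-tower speed below are assumptions chosen only to roughly reproduce the 30 Mbps near-tower and ~5 Mbps at 4 miles figures above; real propagation depends on spectrum, terrain and technology:

```python
# Toy model: throughput decays as a power of distance from the tower.
# The exponent and reference speed are assumptions, not field data.

def estimated_mbps(distance_miles, speed_at_one_mile=30.0, path_loss_exp=1.3):
    """Signal-limited throughput estimate for a customer at a given distance."""
    distance_miles = max(distance_miles, 1.0)  # assume full speed within ~1 mile
    return speed_at_one_mile / (distance_miles ** path_loss_exp)

for miles in (1, 2, 4, 8):
    print(f"{miles} miles from the tower: ~{estimated_mbps(miles):.1f} Mbps")
```

Whatever the exact exponent, the shape is the lesson: a tower sited along a highway delivers a fraction of its advertised speed to the farmhouses a few miles away.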

The carriers have severe data plans and caps. Even when customers happen to live close to a rural transmitter and can get good data speeds, the data plans from the large carriers are capped at very skimpy levels. One HD movie uses around 1.5 gigabytes, meaning that a cap of 2 to 4 gigabytes is a poor substitute for landline broadband. There are still a few unlimited data plans around, but they are hard to get and dwindling in availability. And it's been widely reported that once a customer reaches a certain level of usage on an unlimited plan, speeds are throttled for the rest of the month.
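The arithmetic on those caps is stark. Assuming roughly 1.5 GB per HD movie, here is how far the 2 to 4 GB caps stretch:

```python
# How much HD video a capped cellular plan allows per month,
# assuming ~1.5 GB per HD movie.
gb_per_hd_movie = 1.5

for cap_gb in (2, 4):
    movies = cap_gb / gb_per_hd_movie
    print(f"{cap_gb} GB cap: about {movies:.1f} HD movies per month")
```

A household that watches video every evening would blow through either cap in the first week of the month.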

Voice gets a big priority on the network. Cellular networks were built to deliver voice calls to cell phones, and voice calls still get priority on the network. A cell phone tower has a finite amount of bandwidth, and so once a few customers are downloading something big at the same time, performance for the rest of the cell site gets noticeably worse. 3G networks are intended to deliver short bursts of fast data, such as when a cell phone user downloads an app. There is not enough bandwidth at a cell tower to support hundreds of 'normal' data customers watching streaming video and using bandwidth the way we do in our homes and businesses.

The plans are really expensive. Cellular data plans are not cheap. For example, Verizon will sell you a data plan for an iPad at $30 per month with a 4 gigabyte usage cap. Additional gigabytes cost $10 to $15 each. The same plan for an iPhone is $70 per month, since the plan requires voice and text messaging. Cellular data is the most expensive bandwidth in a country that already has some of the most expensive bandwidth in the world.
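Working out the effective price per gigabyte on the plan described above makes the comparison with landline broadband concrete. The 40 GB monthly usage figure is my own assumption of a modest streaming household, not from the plan:

```python
# Effective price per gigabyte on the $30 / 4 GB iPad plan described
# above, with $10-$15 per additional GB. The 40 GB usage is assumed.
base_price, cap_gb = 30.0, 4.0
overage_low, overage_high = 10.0, 15.0

included_rate = base_price / cap_gb
print(f"included data: ${included_rate:.2f} per GB")
print(f"overage: ${overage_low:.0f}-${overage_high:.0f} per GB")

# A 40 GB month pays the base price plus 36 GB of overage:
monthly = base_price + (40 - cap_gb) * overage_low
print(f"40 GB month at the low overage rate: ${monthly:.0f}")
```

Even at the cheap overage rate, a month of modest streaming costs hundreds of dollars, an order of magnitude more than a typical landline broadband bill.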

There are no real 4G deployments yet. While the carriers are all touting 4G wireless, what they are delivering is 3G wireless. By definition, the 4G wireless specification allows for gigabit download speeds. What we have now is best described in engineering terms as 3.5G, and real 4G is still sometime in the future. There are reports of current cellular networks in cities getting bursts of speed up to 50 Mbps, which is very good but not close to 4G, and most realized speeds are considerably slower than that.

Killing Municipal Broadband in Kansas

There is a bill in committee in the Kansas Senate that would basically prohibit any municipality from building a broadband network that would bring retail broadband, voice or cable TV to any customer. Kansas SB 304 is attached. If enacted, this would add Kansas to the list of many other states that prohibit any form of municipal competition.

I have to declare some bias on this topic, because I work for a number of municipalities that have built or are thinking of building fiber networks. But I also work for a lot of commercial firms that build broadband networks, and my real bias is against having large parts of our country without adequate broadband. It is my opinion that every part of the country ought to have broadband, and whoever is willing to step up and make that investment ought to be allowed to do so.

I can tell you from my experience working with municipalities that decide to get into the broadband business that they feel they have no other choice. Many rural parts of America are on the wrong side of the digital divide, and it's getting worse all the time. The large cities are finally getting good broadband, and in most metropolitan areas customers can buy broadband speeds of 50 to 100 Mbps download today.

There are still a lot of people on farms who can only get dial-up or satellite Internet, neither of which is really broadband at all. But that is not what defines the digital divide any more. The real digital divide can be found in the thousands of towns and counties where broadband speeds are 3 to 10 Mbps. Those speeds, which were probably okay five years ago, are no longer adequate. Any city with 5 Mbps download is already on the losing end of the digital divide. With such Internet speeds they are unable to attract or keep businesses or people in their communities.

Small cities are scared to death of becoming a place where nobody wants to live. Every community hopes for a future where its kids can find jobs nearby and stay a part of the community. Places on the wrong side of the digital divide can already see all of their kids moving off to find jobs elsewhere, and it's getting worse all the time.

A household with only 5 Mbps download is blocked from using the Internet in the same way as people in a metropolitan area. They can't really do two things at once on such a connection, which means that one member of the family can't take an on-line college course while another is browsing the Internet or watching a streaming TV show.
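A quick tally shows why. The per-activity bandwidth figures below are typical-use assumptions, not measurements of any particular service:

```python
# Why a 5 Mbps connection can't support simultaneous household use.
# Per-activity bandwidth figures are typical-use assumptions.
connection_mbps = 5.0

activities = {
    "streaming TV show": 3.0,
    "on-line college course (video lecture)": 2.5,
    "web browsing": 1.0,
}

demand = sum(activities.values())
print(f"simultaneous demand: {demand} Mbps vs {connection_mbps} Mbps available")
print("enough capacity?", demand <= connection_mbps)
```

Any two of these activities together exceed the pipe, so somebody in the household always loses.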

And businesses with a 5 Mbps connection are hamstrung. You certainly can't do much if you share such a small pipe among a lot of computers. While this kind of speed might let a tiny retail business squeak by, companies with multiple employees can't function with inadequate broadband.

I can tell you that small cities mostly look at offering broadband out of well-founded fear. They always try to get the incumbent provider to offer better broadband before they even think about it. But the ugly reality is that rural markets served by the large national incumbents get the worst service and have the oldest and worst networks in the country. While the large cable companies and telcos have stepped up their game in metropolitan areas, they have ignored investing in rural areas for decades.

So laws like the Kansas one are nothing more than the large telcos and cable companies kicking sand in the face of small-town America. They have already shown these communities that they are not willing to invest there, yet they still want to milk them for revenues and don't want anybody else to help these areas.

The IoT of Home Medical Care

If you read my blog much you will know that I talk a lot about the Internet of Things, and that I often mention how the IoT is going to transform medicine. The reason is personal, not just to me but to the whole generation of baby boomers. We are now 60ish and, while that is not yet old, we can all look a decade or two into the future and see ourselves as old.

I think the biggest fear that a lot of us have is losing control of our lives and ending up in an institution. Many institutions are dehumanizing, and even the best-run ones are a far cry from staying in your own home. And so the part of the IoT that probably interests me the most is the set of technologies that are going to let people stay in their homes as long as possible. I don't know about you, but if I had one wish to make with a genie it would be to live to a ripe old age in good health and then die in my own bed.

While the IoT is a relatively new thing, there has already been a lot of thought and research put into using technology to take care of the elderly. Let’s take a look at where some of this early research is headed.

Smart Motion Detectors. One brilliant idea is to install smart motion detectors around the home. Motion detectors can tell a lot about a person without being as intrusive as surveillance cameras. Motion detectors coupled with good software can learn an elderly person's habits and then send out an alert or an alarm if something seems amiss. Such a system ought to be able to tell if somebody has fallen, or is unconscious and not moving, and alert a caregiver if they don't respond. At first this might create some false alarms when somebody is napping hard, but over time the system will get to know the patient and will learn the difference between a nap and real trouble.
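A minimal sketch of the idea: learn how long the quiet gaps between motion events normally last, then flag a gap far outside the norm. The sample history, the three-sigma threshold, and the gap lengths are all invented for illustration; a real system would model time of day, room, and much more:

```python
# Sketch: learn a baseline of quiet-period lengths between motion
# events, then flag gaps far outside the norm. Data and threshold
# are hypothetical.
from statistics import mean, stdev

def learn_baseline(gap_minutes):
    """Learn the typical quiet-period length from historical motion gaps."""
    return mean(gap_minutes), stdev(gap_minutes)

def is_anomalous(current_gap, baseline, sigmas=3.0):
    """Flag a gap more than `sigmas` standard deviations above normal."""
    avg, sd = baseline
    return current_gap > avg + sigmas * sd

# A (hypothetical) history of gaps between motion events, in minutes:
history = [20, 35, 25, 40, 30, 45, 28, 33, 38, 22]
baseline = learn_baseline(history)

print(is_anomalous(45, baseline))    # a long-ish gap: still within the norm
print(is_anomalous(240, baseline))   # four quiet hours: raise an alert
```

The point of learning the baseline per person is exactly the one in the text: a heavy napper's "normal" is different from an active person's, so the threshold adapts rather than firing false alarms.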

This does raise the issue of privacy. Most of the technologies on the horizon are going to compromise some privacy. It's going to be up to each person to decide how much privacy they will trade for getting to stay in their own home, and I think most people will choose the monitoring over the alternative.

Health Monitors. I wrote recently about the Qualcomm Foundation's $10 million XPrize to create a tricorder like the one in Star Trek. There are going to be small, unobtrusive devices that can keep tabs on temperature, blood pressure, blood sugar and a number of other statistics, letting the patient be monitored for general health. This kind of monitoring is going to alert the health system that there is a problem before the patient even realizes it. This is taking preventative care to the next level.

Smart House. There are a lot of devices that can be incorporated into the smart house to help the elderly. Probably the most useful will be the ability to talk to your house and tell it what you need. This means that everything from a call to 911 to making a room warmer is just a voice command away. But there are many other things a smart house can do, like reminding a person when it's time to take medication, to turn off the stove or to lock the doors.

Robots. And finally, let's not forget robots. In a few years there should be robots that can do a lot of the mundane tasks around the house, like cleaning, taking out the trash and watering the plants, which would be a real benefit to an elderly person living alone. And if one can play a mean hand of gin rummy, all the better!

Personal Privacy on the Internet

Because of the NSA spying revelations and the constant news that the big web companies are building a profile of everybody in the country, privacy is a hot topic. It should be fairly obvious to anybody who uses the Internet that whatever you do on-line can be seen by somebody else. But this doesn't mean that you don't have some rights. So I started digging around to see just what rights we have as Internet users, and conversely what rights we don't have. Here is what I found.

Your Personal Data is Really Not Yours. It's a fairly common assumption that people own their own data. But if you give your personal data to a web site, you no longer own that data; you gave it up voluntarily. Websites often promise not to share that data with other companies, but it's the extremely rare web company that doesn't use your data for its own purposes.

I think this misunderstanding comes from the fact that every website has some sort of privacy disclosure, and if you read through it quickly (as we all do, if we read these at all) you might get this notion. But all that these web sites promise is that they will not violate any creative expression or content that you have provided to them. That is a protection provided by US privacy law and extends beyond the Internet. But since web sites rarely get any intellectual or artistic content from you that would be protected, they are free to use anything else you give them. Your name and the fact that you like potato chips are not protected content.

The reality is the opposite of what most people think: the same laws that protect any creative content you create also protect the contents of the databases created by the web companies. If anything, once you give them your information they have more rights to further use of it than you do.

People Cannot Take Back Their Content. It's another common misperception that you can ask a website to delete you and everything about you. But once you have voluntarily given out information about yourself, you have no right to recall it. Websites might allow you to take down a listing or page about you, but nothing requires them to purge your information from their databases. In researching this I saw a very good summary of the point: be very careful what you say on the web, because it is theoretically going to be out there forever.

You Don't Have the Right to be Anonymous. Many people believe that they can maintain their privacy by creating a fictitious persona on the Internet. Obviously you can't do this anywhere you shop, or you would never receive what you ordered. And it's potentially unlawful to create a false persona on a social web site.

Sites like Facebook and LinkedIn want to know who you really are, and it is certainly a violation of their terms of service to be on these sites under a false persona. I recently saw an estimate from Facebook that about 15% of its users are under false names. It's certainly a benefit to Facebook to know who you are, and so they are free to kick you off their site for supplying a false identity.

If you use a fictitious persona you are breaching the contract you agree to when you sign up. While it can be argued that this is breaking the law, it is not likely that Facebook is ever going to go after somebody for it. However, you may also be violating several provisions of the U.S. Computer Fraud and Abuse Act, and if you are ever found doing something else nefarious on your computer these charges could be layered on as well.

You Have No Basic Privacy Rights. People assume that they have some sort of privacy rights when dealing with sites like Facebook. But in fact, the privacy laws today are more for those companies' protection than yours. Companies like Facebook are afforded broad free speech rights that let them basically trample over your privacy. There are no constitutional provisions or specific statutes that give the average consumer any rights on the Internet or on social media sites.

And thus, once you log in and voluntarily give up your information, these companies are within their rights to resell information about you to advertisers or to do pretty much anything else they want with it.

Web 3.0

Web 3.0 is the name that has been given to the next generation of the web. While not everybody agrees with the designations, Web 1.0 was the first-generation web, where everything was flat websites. With Web 1.0 we browsed websites to see what other people wanted to tell us.

We are now in Web 2.0, where users can interactively create content. Instead of just looking at websites, users now interact and create content on social networks like Facebook, Twitter and LinkedIn. YouTube has so much user-generated content that it is one of the biggest traffic generators on the web. And websites are no longer static; users can post their opinions on a newspaper article or create funny reviews on an Amazon product.

Web 3.0 is expected to go a step further and personalize the web experience. It is expected that users will have a personal assistant that will learn their preferences and help them navigate the web. Apple’s Siri is one of the first generation of this type of assistant, but future assistants are expected to advance far past Siri.

The biggest improvement of Web 3.0 is that it will understand context, which is lacking in Siri and in today’s search engines like Google. In the future, if you tell your assistant that you want to buy a mouse, it will know from the context whether you mean the computer device or the little furry animal. The real advantage of understanding context is that search engines will get smarter and will bring you facts. Today the web searches on keywords and brings you every website that contains one of your search words. But in Web 3.0 it is expected that you can ask a question like, “What year was Abraham Lincoln elected?” and get the answer instead of a bunch of websites about Lincoln, Nebraska.
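The difference between keyword matching and answering a question can be shown with a toy sketch. This is purely illustrative; the data, function names and the tiny “knowledge base” are invented for the example and bear no relation to how a real search engine is built:

```python
# Toy contrast: keyword search vs. a fact-oriented lookup.
# All data and names here are illustrative, not a real engine.

documents = {
    "lincoln-nebraska": "Lincoln is the capital city of Nebraska",
    "abraham-lincoln": "Abraham Lincoln was elected president in 1860",
}

# Tiny "knowledge base": (subject, relation) -> fact
facts = {
    ("abraham lincoln", "elected"): "1860",
}

def keyword_search(query):
    """Return every document containing any query word (today's web)."""
    words = set(query.lower().split())
    return [doc_id for doc_id, text in documents.items()
            if words & set(text.lower().split())]

def answer_question(subject, relation):
    """Return a single fact instead of a pile of pages (Web 3.0 style)."""
    return facts.get((subject.lower(), relation.lower()), "unknown")

print(keyword_search("Lincoln elected"))  # both documents contain "Lincoln"
print(answer_question("Abraham Lincoln", "elected"))  # -> "1860"
```

The keyword search happily returns the Nebraska page because it matches on the word “Lincoln”; the fact lookup needs to understand who and what you are asking about, which is exactly the context problem Web 3.0 is supposed to solve.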

A personal assistant will also make life easier. For instance, you can tell your assistant that you want to meet a friend for a birthday lunch and also buy them a present. Your assistant will talk to your friend’s assistant behind the scenes and find a restaurant that is convenient for both of you and that you both will like. And it will suggest presents to you, and once you choose one it will buy it for you, have it gift wrapped and have it delivered to the restaurant. All of this happens behind the scenes with an assistant that understands context.

There will be more to Web 3.0 than just the personal assistant. As more brains get built into the web, the way we use it can get smarter as well. As an example, Google recently patented something it calls geolocation technology. This and tools like it are going to bring some aspects of artificial intelligence to your personal assistant. With geolocation, advertisers will be able to make offers to you (really to your assistant) that depend upon your location. They might offer you a special on a meal, a drink or a purchase at a store a few doors ahead of you as you walk down the street. But your assistant will learn to filter such offers and will only bring to your attention the ones that are going to be of interest to you.
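The filtering step the assistant would do can be sketched in a few lines. The distance math, the thresholds and the offer data are all assumptions made up for the illustration, not anything from Google’s patent:

```python
# Illustrative sketch: an assistant filtering location-based offers.
# Data, thresholds and names are invented for this example.
import math

def distance_m(a, b):
    """Rough planar distance in meters between two (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_000  # ~meters per degree of latitude
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def relevant_offers(offers, user_location, interests, max_distance_m=150):
    """Keep only offers that are both nearby and match the user's interests."""
    return [o for o in offers
            if distance_m(user_location, o["location"]) <= max_distance_m
            and o["category"] in interests]

offers = [
    {"store": "Coffee Shop", "category": "coffee",
     "location": (41.8790, -87.6290)},
    {"store": "Shoe Store", "category": "shoes",
     "location": (41.8791, -87.6292)},
]
user = (41.8789, -87.6291)

# Only the nearby coffee special survives the filter for a coffee drinker.
print(relevant_offers(offers, user, interests={"coffee"}))
```

The point of the sketch is that the advertiser broadcasts to everyone nearby, while the learned interest profile is what keeps most of those offers from ever reaching you.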

The personalized web is going to transform the web experience. You will finally be able to use the web to find the facts you want instantly. You will be able to use the web as your social secretary, or as your to-do list or in any other manner of your choosing.

Who Will Be the Cable Killer?

It’s a given these days that people are dropping cable subscriptions in favor of other sources of content. For now the exodus from cable is a trickle, but as we have seen in other industries, things can turn into a flood quickly if there is a widely acceptable alternative to an older technology.

This leads me to speculate about what company might be the one to break the cable monopoly. My crystal ball is no better than anybody else’s and this is just speculation. But it is not purely a mental exercise, because the odds are that somebody is going to be the cable killer.

One can first look at the characteristics that any cable killer must have. Number one is that they are going to need access to a large number of potential customers. Today only a handful of companies can make such a claim, although we have seen that when something new comes along, a new industry entrant can attract millions of customers in a very short period of time. The cable industry has a handful of large providers, including Comcast with 23 million subscribers, Time Warner with 12 million, DirecTV with 20 million and Dish Networks with 14 million. And Charter would join this group if it is able to buy Time Warner.

So who can compete with those kinds of numbers? I can think of several companies that already have more customers than Comcast. Netflix is one, with over 33 million subscribers. It is not much of a stretch to see Netflix as a cable killer if it can get enough additional programming to lure people permanently away from cable.

Interestingly, the company that has quietly built a huge pile of potential customers is Apple. They have sold over 20 million Apple TVs. And worldwide they have sold over 170 million iPads, many of them in the US. It’s been rumored for years that Apple was on the verge of announcing a programming blockbuster, and perhaps they have just been waiting to get enough Apple hardware platforms into the marketplace before trying to lure the programmers. This company destroyed the music industry in just a few years and perhaps they can do it again with cable.

And we can’t forget Google. Google has been rumored to be thinking about bidding on the NFL Sunday Ticket package when it comes up for renewal. One thing that Google has that nobody else does is the ability to throw billions at launching a new effort in a hurry. Sports programming is one thing that could lure people off of traditional cable, and it is not hard to imagine Google outbidding everybody else for the NFL and a few other sports networks and then also swinging a deal with ESPN.

There is also the upstart Aereo. Assuming the courts don’t stop them, they will be in every medium and large market within a few years, building up a big customer base that is already spending money on alternate programming. While they are only streaming a limited line-up today, they already have the technology in place to support a huge line-up over the air.

It seems to me like it is going to be very hard for programmers to keep ignoring some of these companies. Now that traditional cable is losing customers every quarter it is going to become easier and easier for programmers to do the math and to see that they could get revenues from both the traditional cable operators and the new upstarts. There is no love lost between the programmers and the cable companies and the programmers will make new deals when the math looks right.

If I had to pick a winner from that pile of candidates it would be either Google or Apple. Google is capable of buying the sports market and luring away the many sports fans. Apple could begin offering alternate programming in a hurry through its huge embedded hardware base. And perhaps, the real answer is – all of the above. Once a few programmers decide to break the traditional monopoly they are likely to make a deal with anybody who will give them money for their content. If that happens, the traditional cable companies are toast in terms of keeping any cable monopoly. But they will always be relevant as the largest ISPs in the country.