The Basics of Big Data

I read all the time about how big data is going to transform our lives. Big data is supposed to make our lives better by sorting through the data that surrounds us to help us make sense out of the chaos. This will be accomplished using the tools of the new science and technology called analytics.

The most commonly used tools that make some basic sense out of big data are called descriptive analytics. This is the process of screening big data sets to produce statistics that we can understand. In the simplest sense descriptive analytics is used to count and tally data into understandable pieces.

Descriptive analytics is used to do things like track hits on web sites, followers on social media sites, page views and any other statistic that involves basic counting. One of the better-known uses of descriptive analytics in our industry is when cable and cellphone companies track the amount of data that customers have used during the month to apply against data caps. If you recall, some of the big companies like Comcast had a really difficult time getting this right, and some people still say their counts are not accurate. This illustrates that descriptive analytics does not necessarily mean simple counting and can involve tracking more complex pieces of the larger data set.
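
To make that concrete, here is a minimal sketch of what descriptive analytics looks like at its simplest: tallying raw records into totals a person can read. The usage log, customer names and data cap below are made up for illustration.

```python
from collections import defaultdict

# Hypothetical usage log: (customer, gigabytes used in one session)
usage_log = [("alice", 2.5), ("bob", 1.0), ("alice", 4.0), ("carol", 0.5), ("bob", 3.5)]
DATA_CAP_GB = 5.0  # illustrative monthly cap

# Descriptive analytics: roll raw records up into understandable totals
totals = defaultdict(float)
for customer, gigabytes in usage_log:
    totals[customer] += gigabytes

for customer, used in sorted(totals.items()):
    status = "over cap" if used > DATA_CAP_GB else "under cap"
    print(f"{customer}: {used:.1f} GB used ({status})")
```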

A more complicated type of analytics is predictive analytics. This is the process of not just analyzing the data, but then trying to make predictions about what might come next. The programs used to analyze big data for this purpose use a number of statistical, modeling and data mining techniques to make some sense out of the data. These techniques do not really predict the future, but rather look at existing and probable outcomes and calculate the percentage probability of different scenarios.

For example, you read all of the time about how companies like Facebook or Google can figure out all sorts of things about you, such as whether you are an alcoholic, have insomnia or are just starting a new relationship. They do this by comparing data they have gathered on you to data from millions of other users. These companies look at your behavior, and when you start to resemble a known behavior pattern they use predictive analytics to fill in the gaps and paint a probable picture of you. For example, they will probably not know for sure that you are an alcoholic or have diabetes, but they can calculate the likelihood that you fit one of those known patterns.
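
Here is a toy sketch of that idea, not a description of how Facebook or Google actually do it: fit a simple model to users whose status is already known, then output a probability for a new user. Every feature and number below is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioral features per user:
# [late-night posts per week, drink-related posts per week, bar check-ins per week]
known_users = np.array([
    [12, 9, 4],
    [10, 7, 5],
    [1, 0, 0],
    [2, 1, 1],
    [0, 1, 0],
])
# Labels for users whose status is already known (1 = fits the pattern)
labels = np.array([1, 1, 0, 0, 0])

model = LogisticRegression().fit(known_users, labels)

# Predictive analytics returns a probability that a new user fits the known
# pattern, not a definite statement of fact about them
new_user = np.array([[8, 6, 3]])
probability = model.predict_proba(new_user)[0, 1]
print(f"Estimated probability of fitting the pattern: {probability:.0%}")
```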

This is where the use of big data starts to concern many people. As the techniques used to analyze big data about people get better, these companies might come to know more about you than you know about yourself. For instance, I’ve read that Facebook is getting fairly good at predicting when relationships between couples are coming to an end. Most couples in this situation probably know this as well, but over time Facebook will probably get good at sensing it a lot sooner than the average person can. After all, people are sometimes very unaware of their own behavior patterns, but a company like Facebook, especially when its data is combined with data gathered from other sources, can paint a detailed and accurate picture of you.

The final kind of analytics is called prescriptive analytics, and this takes the trends and statistical possibilities found through predictive analytics and uses them to suggest solutions to problems. We are still a long way from trusting computers to use prescriptive analytics to solve specific problems. But already today we can uncover unsuspected trends in the analysis of big data, and the computer can then suggest several solutions to fix those problems and assign a statistical probability to the potential success of each solution. We are in the infancy of this process, but this is the hoped-for end game from analyzing big data.
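
A minimal sketch of the prescriptive step, assuming a predictive model has already produced success probabilities for a few candidate fixes; all of the actions, probabilities and costs here are hypothetical.

```python
# Hypothetical output of a predictive model: candidate fixes for a congested
# network link, each with an estimated probability of success and a cost.
candidates = [
    {"action": "add a second backhaul circuit", "p_success": 0.85, "cost": 40_000},
    {"action": "re-route peak traffic",          "p_success": 0.60, "cost": 5_000},
    {"action": "cache popular video locally",    "p_success": 0.70, "cost": 15_000},
]

# Prescriptive analytics: rank the options and suggest one, rather than
# merely reporting the probabilities
for option in sorted(candidates, key=lambda c: c["p_success"], reverse=True):
    print(f'{option["action"]}: {option["p_success"]:.0%} chance of success, ${option["cost"]:,}')

best = max(candidates, key=lambda c: c["p_success"] / c["cost"])
print(f'Suggested (best success per dollar): {best["action"]}')
```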

The Internet of Things is counting on success in the techniques of prescriptive analytics. In the near future there will be many more big data sets generated about each person from a number of sources like medical monitors, home security systems, location monitors and the multiple other monitors in our lives. When these data sets are combined with the things we do, such as writing emails, searching web sites and texting our friends, there will be a detailed set of data created about each one of us. For example, let’s say that we feel queasy one evening. Big data will be able to suggest that this might be because we walked close to a glen full of oak trees in full pollen that afternoon, or that it might have come from the shrimp we had for lunch and that a few other people who ate at that restaurant are experiencing the same feeling. Big data will be able to correlate the things that happen to us with what is happening in the wider world.
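
That kind of correlation can start out as simple co-occurrence counting across many people. The people, exposures and symptom reports below are made up just to show the shape of the calculation.

```python
from collections import Counter

# Hypothetical event streams gathered from many people's devices and records:
# (person, exposure) pairs, plus the set of people who later reported feeling queasy
exposures = [
    ("ann", "oak pollen"), ("ann", "shrimp at cafe"),
    ("ben", "shrimp at cafe"), ("cat", "oak pollen"),
    ("dee", "shrimp at cafe"), ("eli", "oak pollen"),
]
felt_queasy = {"ann", "ben", "dee"}

# Correlate: what fraction of the people with each exposure reported the symptom?
totals, affected = Counter(), Counter()
for person, exposure in exposures:
    totals[exposure] += 1
    if person in felt_queasy:
        affected[exposure] += 1

for exposure in totals:
    rate = affected[exposure] / totals[exposure]
    print(f"{exposure}: {affected[exposure]}/{totals[exposure]} reported queasiness ({rate:.0%})")
```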

The average person is going to experience the results of big data by having something that seems like a self-aware assistant, or at least a set of programs that seem to be aware. These programs will track everything we do and will give us a whole new set of tools to understand ourselves and to control our personal world better. But these same big data sets could also be used by others to know things about us that we want to keep private. Probably the scariest thing about this kind of analytics is that everybody has secrets they would prefer not to reveal, and these analytics tools can go a long way towards uncovering the little secrets we all keep.

Today we are still exploring the techniques that will help us make sense of big data, but as that starts working we are also going to have to find ways to protect our privacy.

Politics and Net Neutrality

Politics is always around in our industry, but it is mostly out of sight. The big telecom companies maintain hordes of lobbyists to push their interests, but this is mostly done out of the public eye. It’s been a rare thing during my career to see a telecom issue play out big in the news, and it’s only happened a few times. I remember a flurry of politics during the passage of the Telecommunications Act of 1996, but I’m not sure many people outside the industry paid a lot of attention to that. I remember some louder politics that same year when Congress passed the Communications Decency Act, which tried to get pornography off the Internet. But for most of my career it has been rare to have politics intersect visibly with the industry.

Our industry makes the news fairly often, but the headlines are usually about things like mergers rather than about politicians debating both sides of a telecom issue. But net neutrality has grown into an issue that is in the news every week. Net neutrality was political news once before, but it’s very different this time around. In 2011 Republicans tried to pass a bill to repeal the FCC’s net neutrality rules, but that bill never got a lot of traction and few people outside of the industry probably even heard about it.

All of a sudden net neutrality is being discussed everywhere. It’s even made it into popular culture. Stephen Colbert and Jon Stewart both launched into funny diatribes in favor of net neutrality. John Oliver went on a 13-minute rant about net neutrality and got so many people to contact the FCC that it crashed their servers. The advocacy group Free Press, along with 85 other organizations, delivered over a million signatures on a petition to the FCC asking them to enforce net neutrality. That’s a lot of signatures, and a Google search shows that only a few other petitions have gotten that many, including one earlier this year protesting the Russian figure skating judges at the Olympics. This is getting into rarified air in the world of popular culture.

Polls seem to make it clear that the majority of people don’t want the big carriers to mess with the Internet. Yet, perhaps sadly, politicians are weighing in on net neutrality straight down party lines, as with so many other issues these days. Earlier this year a number of House Republicans sent a petition to the FCC asking them to halt any consideration of imposing net neutrality rules. Last week Democrats in the House and Senate proposed bills that would prohibit the FCC from allowing ‘fast lanes’. You can’t look at political news lately without seeing another politician saying something about net neutrality.

I don’t know what to make of all of this. As somebody who works in the industry, I generally hope that we are able to work out our own issues in the normal fashion – which is to have the FCC issue a new order and then have the courts decide which of its provisions are legal and sustainable. It’s a bit of an awkward system because it often takes a few years between the first order and final implementation, but it has mostly worked.

Right now, because of the court order overturning the FCC’s original net neutrality order, we are operating in a vacuum on the issue. There are no rules in place at the FCC that require or ban most carrier practices in this area. We instead have some vague rumblings from FCC commissioners telling carriers not to do anything too outrageous or they will face some unknown consequences.

Perhaps I should take some solace that we currently have a split House and Senate, each controlled by a different party. This puts net neutrality in political limbo and it is highly unlikely that either party will be able to do anything about the topic from a legislative perspective.

But I am not comforted by that limbo, because my fear is that over the course of a decade or so each of the parties might have a period when they have enough votes to change the net neutrality rules to their liking. I envision one set of rules being put in place by one party and then overturned when the other party gets into power. What this means in practical terms is that the industry will be in limbo over the topic for a long time, never quite able to trust whatever rules are in place at any given time. And the one thing I have seen in this industry is that uncertainty is a bad thing. Uncertainty in this industry often manifests as cutbacks in capital spending in the areas of concern. The last thing we need is for carriers to be worried about making the investments needed to keep the Internet fast. Because if that happens, we all lose, and net neutrality won’t be that important if the whole Internet gets impaired.

The Early Battle for the Internet of Things

There are already a number of players aiming to become the primary player in the residential market for the Internet of Things. I think all of them see this as a numbers game and that whoever can gain customers the fastest has the opportunity to become the largest. And so we are going to start seeing fierce battles in the marketplace. As you would imagine, every competitor is going about this in a different way.

Google is probably making the most news in the area, because with their billions they are buying themselves into the business. Google paid $3.2 billion for Nest, a maker of smoke detectors and thermostats. Last week they announced the purchase of video camera maker Dropcam for $555 million. Google also acquired Boston Dynamics, a robotics firm, and DeepMind, an artificial intelligence company. And one can expect Google to continue to buy the pieces needed to put together a fully integrated suite of IoT products.

This week Google also announced their primary strategy, which is to open up their IoT platform to outside developers. They envision a world where the apps have as much value as the hardware, or more. Google wants to sell hardware and control the base platform over which others develop apps to provide customization for customers.

Apple is taking a very different approach and wants to become the software platform that makes everything work together. Apple believes there will be numerous manufacturers of smart devices (in fact, most of the things in our homes will become smart over time), and they don’t believe consumers are going to want to be tied to one proprietary package of devices, but will want anything they buy to work with everything else. So Apple is trying to put together that platform, which they call HomeKit, to connect your garage door opener, your door locks, your thermostat and everything else together, giving homeowners a customized suite of products that suits them. HomeKit is also an open platform that allows outside developers to create apps.

Honeywell just announced that it plans to offer significant competition to Google. Honeywell is the largest manufacturer of thermostats today, and they announced a slick mobile-controlled thermostat they call Lyric. But Lyric is not just a thermostat; it is going to be the base of a full suite of home automation products – all DIY and all controlled by smartphones. Honeywell is not only building their own platform, but they are also hedging their bets by working with the Apple HomeKit program.

There are also smaller companies trying to break into the market who are hoping that by being early they can gain market share. For example, Lutron is the largest manufacturer of lighting control systems, and they are expanding that platform to become the hub for integration with other devices. They think they have an edge since they already have lighting platforms in millions of homes.

And there are a number of start-ups chasing the market. Revolv has introduced a slick box that does a pretty good job today of integrating different devices into a coherent package. ALYT is a crowdfunded start-up that plans to provide a full suite of communications technologies, from Bluetooth through cellular, making it easier to communicate with any device or the outside world.

This is going to be an interesting battle to watch. Each of these firms has taken a different approach. I certainly don’t have a crystal ball, but I am going to bet that the one that makes all of this work the easiest is going to have the best chance of winning the battle. But one can also suspect that for decades multiple companies will own a decent segment of the market as they appeal to different groups of customers. Still, it’s hard to bet against Apple and Google being two of the largest players. Each is creating an open platform for developers to create apps, and those apps are likely to give them an edge over any proprietary system.

The Battle for the Integrated Car

Google is expected to unveil a smart car operating system later this month at its upcoming developer’s conference. This follows an announcement at the beginning of this year of the creation of the Open Automotive Alliance, which consists of Google and chipmaker Nvidia along with General Motors, Honda, Audi and Hyundai.

This system would obviously be Android-based and would allow for full integration between an Android phone and your car. The car software would automatically recognize and integrate with your smartphone so that you could perform phone functions without having to look away from the dashboard.

This is direct competition with Apple’s CarPlay, which is also supposed to be available sometime this year. Apple has said that their software for iOS phones would let your car do things like send and receive emails and texts and use GPS navigation from applications on the phone. Apple has allied with Ferrari, Honda, Mercedes-Benz and Hyundai.

A lot of cars already have software that allows the same basic functions. For example, my wife’s Toyota has a Bluetooth system that lets her sync with her music or with Siri and do all of the things Siri can do. And my Ford truck has something similar, although it has reset itself three times in six months and leaves a bit to be desired in terms of ease of use.

Today’s platforms are largely proprietary, and both industry groups are trying to bring a standard platform to the industry, because the real end game and the big dollars come from the ability to develop and sell apps designed specifically for driving. For instance, today my wife can use Siri for navigation, but she cannot activate a separate navigation app should she choose to use something different. I can envision specialty navigation apps that might be used by vacationers, truckers or business travelers, all of whom have different travel goals.

So these two industry giants are going to battle it out, mostly by signing up car manufacturers, to become the de facto smartphone integration platform. Google has the early lead, just from having signed up General Motors, but the battle is far from over. And since Honda and Hyundai are working with both groups, perhaps both will win and cars will come equipped with one or both systems.

This is very different from Google’s self-driving car project, which is still moving steadily forward. Earlier this year Google described how they make this work, and it is a solution that only Google could pull off.

Today Google’s cars are driving successfully around Mountain View, California. The company has put in hundreds of thousands of miles of driving on those city streets. I always thought that Google would make self-driving cars work by having them learn all of the little nuances of what it takes to drive a car. But as it turns out, that would require something very akin to self-aware artificial intelligence, and nobody is very close yet to achieving that.

Instead Google has gone with the brute force solution of thoroughly mapping every inch of Mountain View. Thus, the car already knows what to expect. The car is not completely dumb, of course, and is very good at recognizing other cars, bicyclists and pedestrians. But by taking away the need for it to understand the streets, Google has vastly reduced the computational load of the system.

So a Google car in Mountain View already knows every inch of the streets. If it comes across something unexpected, say construction, it will alert the driver to take over if it feels unable to navigate the unexpected situation. This is a solution only Google could pull off, because to take this technology outside of Mountain View they will have to completely map other towns. And that doesn’t scare Google. They would look at the project of mapping all of the streets in the major towns in the country as an opportunity to update their maps and to learn more about the world.
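
A highly simplified sketch of that hand-off logic, under the assumption that the car compares what its sensors see against the stored map; the place names, threshold and function are hypothetical.

```python
# Hypothetical sketch of the "pre-mapped streets plus hand-off" approach:
# the car trusts its stored map, and hands control back when reality
# no longer matches what the map predicts.

PREMAPPED_AREAS = {"mountain view"}   # streets assumed to be mapped in detail
MATCH_THRESHOLD = 0.90                # how closely sensors must agree with the map

def drive_decision(city: str, sensor_map_agreement: float) -> str:
    if city.lower() not in PREMAPPED_AREAS:
        return "hand control to the human driver (no detailed map)"
    if sensor_map_agreement < MATCH_THRESHOLD:
        return "alert the driver to take over (unexpected obstacle, e.g. construction)"
    return "continue self-driving using the stored map"

print(drive_decision("Mountain View", 0.98))   # normal trip
print(drive_decision("Mountain View", 0.75))   # construction ahead
print(drive_decision("Gilroy", 0.99))          # unmapped town
```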

One can envision Google cars that are really good at getting back and forth to work, to the grocery store and to a friend’s house. But if you wanted to visit your mother in the country, the car would hand the driving back to you. Google sees this as an economically feasible product up until the day when cars really can learn streets on the fly.

How Do You Handle Cyber-Harassment?

This blog asks the question of how ISPs respond to claims of cyber-harassment. What do you do when one of your customers is accused of harassing somebody else? We’ve all heard of terrible cases where cyber-harassment has led to suicide or other tragic results. But there are many degrees of harassment, and I am curious how different ISPs respond to claims of harassment.

Are there any legal requirements that you do anything? Obviously all ISPs will respond to subpoenas or court orders that require somebody to stop harassing another person. But that is the end result of a legal action, and most harassment never makes it all the way through the legal system.

There are only a few other legal requirements to consider. First is the Communications Decency Act, which Congress passed in 1996 as its first attempt to legislate the Internet. The Act had provisions that tried to outlaw pornography, but the Supreme Court struck down those parts of the law. Other provisions of the law are still in effect, and the one that matters in this instance is the determination that ISPs are not the ‘publishers’ of content posted by their users, and thus you are immune from prosecution for things said and done by your customers online.

But most ISPs have a document of their own making that imposes some obligations: the terms of service that customers sign when they connect to your Internet service. It’s pretty routine in those documents to have language that gives you the right to cut off service to customers who use the Internet for nefarious purposes. Sometimes the language in these documents is very generic, and sometimes it says very specifically that you will not tolerate customers who harass others on the Internet.

So you should review the TOS occasionally to remind yourself what specific obligations you might have created for yourself. In this document you have not only defined what customers cannot do on your network, but you have also implied that you will react in some way to customers who violate your rules.

Beyond these very minimal obligations there are no other specific external rules. Your customers are generally free to publish all sorts of things that you or the public might find repugnant. They might post white supremacist or neo-Nazi pages that spew hate at other races. They might bash gays, or bash Muslims, or bash Christians. They might bash the Grateful Dead (my own version of intolerable behavior!). But generally, as long as these postings don’t cross a legal line, your customers are within their First Amendment rights to post all sorts of disagreeable things.

It’s hard to define exactly, but at some point a customer crosses the line when what they say is aimed at an individual rather than at the wider world. Cyber-harassment takes many forms: sending harassing emails, creating false web pages to defame somebody, disseminating false or private information online, uploading unauthorized pictures or videos, impersonating another person in a public forum, spreading false rumors through social media sites and many other things.

The chances are that if you don’t hear about this from law enforcement, you will be contacted by the person being harassed. It’s likely that they will show you examples of what your customer has been doing. And this is when you have a hard decision to make. Has your customer clearly violated your terms of service? If so, what are you willing to do about it? Do you warn them? Do you ask them to cease the bad behavior? Do you just toss them off your network?

Let’s face it. Most of the people who run ISPs are pretty good guys. Nobody wants to be running a service that is contributing to other people being harmed. But it is very uncomfortable being asked to be judge and jury. We have all written our terms of service to be somewhat murky on purpose. And that means that there are going to be many instances when you will agree that something is unsavory without necessarily being able to say it is a clear-cut violation of your terms of service. And even if it violates your TOS, there are always degrees of violation.

It’s not an easy question and every ISP I talked to about this felt really uncomfortable with this part of being an ISP. Almost all of them have had troublesome customers where this sort of behavior occurred. But the responses range from doing nothing, to warning the customer to cease the bad behavior, to cutting them off the network. One protection that most of you have is that your Internet service is not considered a utility service and so you have the legal right to choose who is (or in this case who is not) your customer. But that still does not make this easy.

More Trouble for Google and the Internet

I find myself feeling a bit sorry for Google, and that is not easy to do. One tends to think of them as a very powerful corporation. But as powerful as they might be in some ways, they just got another absurd court ruling that has to have them scratching their heads.

I wrote about another absurd court ruling over a month ago, when a Spanish court ordered Google to let people expunge embarrassing things from the Internet. The facts behind that ruling were that a man was embarrassed that he had been listed years ago in a newspaper as delinquent on the tax payments on his home. It was never disputed that he had failed to pay his taxes on time. But the court still ruled that he has the right to ask Google to expunge the embarrassing material from the web.

Now comes a judge in Canada who is ordering Google to take more content off the Internet. The facts this time center on Equustek Solutions, which claims that a rival company stole its technology for an Ethernet gateway and is illegally profiting from its intellectual property. The court agreed it is theft, but rather than pursue the normal commercial remedies, the judge turned to Google and told them to remove all of the competitor’s ads from the web.

Google offered to remove all references from Google.ca, which is where most Canadians use Google. But the Supreme Court of British Columbia said that was not good enough and ordered Google to remove the references worldwide. At first glance one might say that this is good justice. Assuming that the court is right and the intellectual property was stolen, this provides justice of a sort for Equustek Solutions. But once you think more about it, this is an absurd ruling for a whole lot of different reasons.

First, there are already mechanisms in place to deal with international theft of goods and ideas. Countries have treaties, trade agreements and diplomats to deal with this kind of theft – something that happens all of the time. These mechanisms may not always work the best, but they are how the world as a whole deals with these things. It’s very questionable whether any one court anywhere has the jurisdiction to override trade treaties agreed to by its own government and other governments.

Further, Google is not the only web source for the stolen gateways, and there are other ways for people to continue to find the illegal devices. People who shop at a favorite supply house are still going to find them. People using other search engines like Bing are still going to find them. People who shop at Amazon are likely to still find them.

This may not sound like a bad precedent, but it allows a court in one country to order Google, or any web company, to remove content that the court finds offensive. I don’t think there will be many people defending the right of a company to sell stolen patented devices. But little legal precedents grow into big rights.

This ruling could quickly escalate once other judges hear about it. The judge in Canada said that the ruling was based in part on what had happened in Spain. But what’s next? What if a court in Iran asks Google to remove references to all books by Salman Rushdie from the web because he is an infidel? What if a court in some conservative American state asks Google to remove all content related to abortion and birth control? What if the Syrian government asks Google to remove any news about its fight with other factions in the country?

At the end of the day this ruling condones censorship, plain and simple. It puts Google into the huge bind of agreeing to be the world’s censor. I am sure that Google is appealing this to higher courts in Canada, but in the meantime do they comply with the order? A part of me hopes that they simply ignore the order and ignore any fines associated with the order. This is a rogue ruling by a rogue court and in the end will probably be struck down within Canada.

But the much bigger issue is what Google is going to do when they are confronted with an even bigger moral dilemma. What do they do when an absurd order comes out of the supreme court of a major country and can’t be appealed? Does Google comply and censor the whole world, or do they pull out of the country making the request? In both cases the world loses and the Internet gets diminished.

I guess it was inevitable that this had to happen. The Spanish ruling was pure insanity. The guy didn’t pay his taxes on time, and all Google did was scan a database from a newspaper that reported it. The newspaper had the right to publish this, and so Google had the right to scan it. Facts are facts, and we are starting down a slippery slope when we start picking and choosing which facts are allowed on the Internet. We already know where censorship leads – look at China, where hordes of people ride herd every day on what the Chinese people are allowed to read or say on the web. Let’s please not let that system get foisted onto the rest of us.

Maybe Finally a Faster WiFi

The first wave of 802.11ac WiFi routers is starting to show up in use, and already there is something faster on the horizon. The IEEE has announced that it is starting to work on a new standard named 802.11ax, and it looks like the new standard might be able to deliver on some of the hype and promises that were mistakenly made about 802.11ac. This new standard probably is not going to be released until 2018.

I call that hype unfortunate because 802.11ac has widely been referred to as gigabit WiFi, but it is not even close to that. In real-world applications of the technology it’s been reported that the ac routers can improve performance over today’s 802.11n routers by between 50% and 100%. That is a significant improvement, and it is a shame that the marketing hype of the companies that push the technology has created an unfulfillable expectation for these routers. I refer you to my earlier blog that compares the reality to the hype.

The gigabit name given to 802.11ac has more to do with the increased capacity of the router to handle large bandwidth than with the connection speed to any given device. But the 802.11ax standard is going to turn its attention to increasing the connections to users. The early goal of the new standard is to increase bandwidth to devices by as much as 4 times over what can be delivered with 802.11ac.
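
A quick back-of-envelope projection using the figures cited above; the 802.11n baseline speed is an assumption chosen only for illustration.

```python
# Back-of-envelope projection using the figures cited in this post.
# The 802.11n baseline is an assumption for illustration only.
baseline_n_mbps = 100             # assumed real-world 802.11n throughput to one device

ac_low  = baseline_n_mbps * 1.5   # 50% improvement reported for 802.11ac
ac_high = baseline_n_mbps * 2.0   # 100% improvement reported for 802.11ac
ax_goal = ac_high * 4             # 802.11ax goal: up to 4x what 802.11ac delivers

print(f"802.11ac (reported): {ac_low:.0f}-{ac_high:.0f} Mbps per device")
print(f"802.11ax (goal):     up to {ax_goal:.0f} Mbps per device")
```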

This improvement is going to come through the use of MIMO-OFDMA. MIMO stands for multiple input, multiple output and refers to a system that has multiple antennas in the router. Devices can also have multiple antennas, although that’s not required. OFDMA stands for orthogonal frequency-division multiple access and is a technique already used in 4G wireless networks today.

The combination of those two techniques means that more bits can be forced through a single connection to one device, even one using a single receiving antenna. Making each individual connection from the router more efficient will improve the overall efficiency of the base router.
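
One way to see why more antennas help is a simplified Shannon-style ceiling: capacity scales with the number of spatial streams times the channel width times log2(1 + signal-to-noise ratio). The numbers below are illustrative assumptions, not figures from the standard.

```python
from math import log2

def ideal_capacity_mbps(streams: int, bandwidth_mhz: float, snr_linear: float) -> float:
    """Shannon-style upper bound: streams x bandwidth x log2(1 + SNR)."""
    return streams * bandwidth_mhz * log2(1 + snr_linear)

# Illustrative comparison (the channel width and SNR are assumptions)
single_stream = ideal_capacity_mbps(streams=1, bandwidth_mhz=80, snr_linear=100)
four_streams  = ideal_capacity_mbps(streams=4, bandwidth_mhz=80, snr_linear=100)
print(f"1 spatial stream:  ~{single_stream:.0f} Mbps theoretical ceiling")
print(f"4 spatial streams: ~{four_streams:.0f} Mbps theoretical ceiling")
```

Real-world throughput sits well below these ceilings, which is exactly the gap between marketing numbers and the speeds people actually see.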

Interestingly, Huawei is already using these techniques in the lab and is seeing raw data rates as fast as 10 gigabits per second from a router. Huawei is one of the leaders of the 802.11ax standards process, and they don’t believe these routers will be market ready until at least 2018.

What I find most puzzling in today’s environment is that a lot of vendors have bought into the 802.11ac hype hook, line and sinker. For example, it’s been reported that a number of FTTH vendors and settop box vendors are touting the use of 802.11ac instead of cabling to route TV signals around a home. This might work for single family homes on large lots where there won’t be a lot of interference, but I can foresee many situations where this is going to be a challenge.

Certainly there is a lot of chance for interference when you try to do this in an urban environment where living units are crammed a lot closer together. I highlighted some of the forms of WiFi interference in another earlier blog. But there are also other situations where WiFi will not be a great solution for transmitting cable signals between multiple sets. For example, there are plenty of older homes built in the fifties or earlier that have plaster walls with wire mesh lath, which can stop a WiFi signal dead. And there are homes that are larger than the range of the WiFi signal once you consider walls and other impediments.

But it looks like the 802.11ax standard will finally create enough bandwidth to individual devices to make WiFi a reliable alternative to cabling within a house. My fear is that there are going to be so many cases where 802.11ac is a problem that WiFi is going to get a bad name before then. I fear the vendors who are relying on WiFi instead of wires might be a generation too early. I hope I’m wrong, but 802.11ac does not look to be enough of an improvement over our current WiFi to act as a reliable alternative to wires.

The Battle of the Network Switches

Yesterday Facebook announced that it has successfully built an open-source network switch. This is really big news in an industry where Cisco and Juniper together have more or less cornered the switch market. The Facebook switch has been named Wedge and is operated by an open-source software platform they call FBOSS. It was created as part of the Open Compute Project (OCP), started by Facebook but now involving many other companies. The goal of the project is to radically change the way companies buy hardware and software, and it is starting to achieve those goals.

This announcement is going to shake up the $23 billion Ethernet switch market in the same way that the introduction of the softswitch killed the duopoly on voice switches once held by Nortel and Lucent. I’ve written earlier about how the Ethernet switch industry is moving towards software-defined networking (SDN). The goal of SDN is to take features that have been baked into hardware, such as security and device management, and make those functions software controlled.
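
As a hypothetical sketch of the SDN model, the idea is that a policy becomes data handed to a central controller rather than configuration typed into each box. The rule fields and controller endpoint below are invented for illustration and are not FBOSS or any vendor’s actual API.

```python
import json

# Hypothetical sketch of the SDN model: network behavior is expressed as data
# and handed to a central controller, instead of being configured box-by-box.
# The rule fields and controller endpoint below are invented for illustration.

flow_rule = {
    "switch": "leaf-switch-03",
    "match": {"dst_ip": "10.20.30.0/24", "protocol": "tcp", "dst_port": 22},
    "action": "drop",            # a security policy enforced in software
    "priority": 100,
}

payload = json.dumps(flow_rule, indent=2)
print(payload)
# In a real deployment this JSON would be POSTed to the controller's API
# (e.g. something like http://controller.example/flows), and the controller
# would program every affected switch -- no per-device CLI sessions needed.
```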

Cisco has already introduced their own version of SDN, and they now have software that will control their various devices. But honestly this is only a modest change for them, because at the end of the day all of their hardware and software is proprietary. We are all very familiar with network engineers who need multiple Cisco certifications just to be able to operate the Cisco gear. Cisco’s SDN doesn’t really change that need for network engineers or lower the cost. It just layers new software on top of the old platform.

The industry was ripe for this change because Cisco has grown into the same kind of company that we saw in Lucent and Nortel at their peak. The Cisco pricing model now includes a permanent 15% annual fee on top of any hardware you buy from them. This fee is ostensibly for upgrades and maintenance, but the people who write the checks for it don’t feel like they are getting much value in return. This sounds exactly like the kind of pricing practice we saw in the voice industry when it was a duopoly of Nortel and Lucent.
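
Some quick arithmetic shows why that fee stings; the purchase price and lifespan below are assumptions for illustration.

```python
# What a permanent 15% annual fee looks like over a switch's life.
# The purchase price and lifespan are assumptions for illustration.
purchase_price = 100_000      # assumed hardware cost
annual_fee_rate = 0.15        # the 15% maintenance/upgrade fee cited above
years_in_service = 7          # assumed useful life

maintenance_total = purchase_price * annual_fee_rate * years_in_service
lifetime_cost = purchase_price + maintenance_total
print(f"Maintenance over {years_in_service} years: ${maintenance_total:,.0f}")
print(f"Total cost of ownership: ${lifetime_cost:,.0f} "
      f"({lifetime_cost / purchase_price:.2f}x the hardware price)")
```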

Cisco has been reported to have a 60% profit margin, and so they are ripe for a challenge. Cisco is not going to go away easily, and they have been very clever in the way they have shaped the network switch market. That market is run by switch engineers who make the buying decisions, and Cisco has made certain those engineers carry a long list of Cisco certifications. And frankly, the OCP initiative is aimed directly at getting rid of those network engineers, in the same way that cloud computing is doing away with server engineers.

Certainly Cisco has already lost the largest customers in the market. Facebook will be going with their own new technology. It’s been reported that Amazon, Microsoft and Google are all working on their own versions of SDN switches as well, although none of them are reported to be headed towards open-sourcing like the OCP initiative. But one would think that this is going to put a massive amount of price pressure on Cisco in a few years, as ought to happen with any company that has gigantic profit margins. There are still going to be a number of network operators who will go with traditional Cisco for a while simply because it works and is comfortable for them. But as the OCP hardware becomes readily available and proves able to work in the market, it’s going to get harder and harder to justify buying expensive and proprietary switches.

It took a full decade for the traditional voice switch manufacturers to fail after the introduction of the softswitch. And Cisco is probably better equipped to fight back against this change than were Nortel and Lucent. But in the early days of the softswitch I saw some of my clients cut their hardware and maintenance costs in half by going with a softswitch and it was obvious then that the newer technology would eventually win. This Facebook announcement is the first day of the decade that is going to transform the way we buy and use network switches.

Expanding Public WiFi

Comcast began the process last week of turning home WiFi routers into public hotspots. They announced that they were turning up 50,000 home routers in Houston, and that this would be followed nationwide with millions of home routers being opened up to allow access to anybody with a Comcast password or anybody willing to buy bandwidth by the hour.

Comcast says that this is being implemented by opening up a second channel in each router so that external users won’t be using the same bandwidth as the paying customer. Comcast promises this won’t degrade the bandwidth purchased by customers. Interestingly, they are going to match the bandwidth of each public channel to the home bandwidth the customer has purchased.
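
A hypothetical sketch of what that dual-channel policy amounts to; every field name here is invented for illustration and is not Comcast’s actual provisioning system.

```python
# Hypothetical sketch of the dual-SSID policy described above: one private
# network for the subscriber and one isolated public network capped at the
# same speed the home has purchased. All names and fields are invented.

def build_router_config(subscriber_tier_mbps: int) -> dict:
    return {
        "private_ssid": {
            "name": "HOME-NETWORK",
            "rate_limit_mbps": subscriber_tier_mbps,   # the customer's paid tier
            "isolated": True,                          # home devices stay private
        },
        "public_ssid": {
            "name": "PUBLIC-HOTSPOT",
            # the public channel is matched to the speed the home has purchased
            "rate_limit_mbps": subscriber_tier_mbps,
            "isolated": True,                          # no access to the home LAN
        },
    }

print(build_router_config(subscriber_tier_mbps=50))
```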

I must say as a Comcast customer that this feels both good and also a bit scary. It certainly would be convenient when walking around my town to be able to be connected to Comcast WiFi and not use cellphone data. And it certainly could make it convenient for me to go outside and still be able to work on my laptop or tablet. So for someone like me who is always connected this sounds promising.

But as the owner of a Comcast router of my own, I am somewhat worried by the security aspects of this. There is a nagging part of my brain that tells me that even if this is done on separate channels, there are people smart enough to hack it. So I worry that this could give somebody access to what I am doing inside my own home on my own network. I hope I am wrong about this, but it seems a lot easier to believe somebody could hack me starting from inside my router rather than having to start outside of it. Comcast does offer the option, for now, of turning off the second public channel of your router. I’m not sure what they’ll do if everybody chooses that option.

One thing to remember is that this is not Hotspot 2.0, which is a suite of technologies that is going to let people automatically connect to WiFi routers as they move from place to place. That new technology is supposed to come with new security features that will make it safer to be on a public WiFi router. But Comcast is still deploying current WiFi technology, and a user just has to log on one time to any Comcast hotspot and they will then automatically log on to other hotspots with the same password and ID.

Certainly as I move around town on Comcast hotspots I am going to use the same security measures that I would use at a Starbucks. I won’t log into financial institutions or make credit card purchases. Those are common sense security measures to take when sharing a hotspot with people you don’t know. But over the last few days I read a lot about hotspot security, and there are a lot more dangers out there. A smart hacker can get into your computer and dig out whatever data you have stored, including passwords to accounts and other damaging data. So this is the scary side of using Comcast hotspots or allowing my home router to become one of them.

I also now have to worry that I am giving Comcast the same sort of data about my whereabouts that I give to the cell phone companies. Comcast will be able to follow me as I move around and the knowledge of when and where I go has to be worth something in terms of profiling me.

Why would Comcast do this? They began deploying public hotspots in areas where they face significant competition from Verizon FiOS. For example, it’s been reported that you can go almost anywhere on the Jersey shore and stay connected to Comcast. So in those kinds of markets it is a feature and a service that they think gives them a competitive edge.

But I see less advantage from deploying this in the average suburban neighborhood. It makes a lot of sense in downtown areas, even in small towns, where WiFi can be deployed where people shop and dine and congregate. But a WiFi signal doesn’t propagate very far from any one hotspot and so in suburban areas one can imagine your cell phone gaining and losing WiFi access as you take a walk. I shudder to think about what that is going to do to the battery on my cell phone as it constantly searches and adds and drops WiFi connections.

The big beneficiaries of this are the wireless companies, and one can speculate that Comcast has figured out a way to charge them something for WiFi offload of cellular data. If not, they are missing an opportunity. I know that Cisco and other manufacturers have been talking up WiFi offload as a new business line, but I have not yet heard of any specific deal being struck anywhere for this as a revenue generating service.

Is AT&T Serious About Building Fiber?

AT&T has been in the news a lot lately with announcements of new markets where it will launch gigabit fiber. This all started in April of 2013 when AT&T announced that it would bring fiber to Austin, Texas immediately following the Google announcement to build there.

But since then Google upped their game and announced a long list of cities where they were considering gigabit fiber, many of them in AT&T markets. It seems that AT&T felt obligated to respond to this, and so in April of this year they released a major press announcement that they were going to bring their AT&T U-verse with GigaPower service to up to 100 cities and municipalities nationwide, including 21 new markets. These new markets include major cities like Atlanta, Miami, Chicago, Houston, San Antonio, San Diego, Dallas, Cleveland and many other large metropolitan areas.

In the last week we started to see this announcement come to fruition when AT&T announced deals with Winston-Salem and Durham, North Carolina, where AT&T said it would begin fiber construction within a ‘few weeks’. AT&T is working on similar arrangements with Carrboro, Cary, Chapel Hill and Raleigh in North Carolina.

These announcements are great, and it’s wonderful to see AT&T making a commitment to expand broadband with fiber instead of copper. I’ve always thought it was just a matter of time until AT&T followed Verizon and their FiOS product. AT&T has done a pretty remarkable job milking the most they can out of their copper network, but the demand for HD video, and now ultra HD video, is putting a huge strain on their U-verse product.

But then I noticed that AT&T might be telling a different story to their shareholders. At the end of April AT&T announced that it was cutting capital spending by $2 billion per year in 2014 and 2015 compared to previous estimates. This dropped annual capital spending from $22 billion to $20 billion, still pretty large numbers. But this budget reflects spending on both wireline and wireless capital, so there was no telling from that announcement what the cut meant for each business line. A few weeks ago, though, George Notter, an analyst at Jefferies, reported that within those numbers AT&T had ‘significantly reduced’ capital spending on the wireline network.

And so one wonders how AT&T is going to fulfill all of these expansion claims. It’s going to require many billions of dollars just to get started with fiber in major markets like the ones they have announced as fiber candidates.

Meanwhile, AT&T has been telling the FCC that it wants to drop millions of copper customers as part of the IP transition. As I discussed last week, that FCC proceeding was intended to figure out how to move the networks that carriers use to communicate to all-IP. But AT&T has hijacked the proceeding to talk about moving copper customers to wireless service. And that transition to wireless will cost money as well.

This all just doesn’t add up. If AT&T is really getting ready to expand fiber to this many new markets, then there should be a major uptick in landline capital spending. After all, Verizon spent over $23 billion on their FiOS fiber network, and as I look at the list of cities, AT&T’s plans seem to be even larger than what FiOS has done.
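
Some rough arithmetic, using only the figures cited in this post, shows why the numbers are hard to reconcile.

```python
# Quick arithmetic using only the figures cited in this post.
previous_capex_b = 22      # AT&T annual capital budget before the cut ($ billions)
new_capex_b = 20           # announced budget for 2014-2015
fios_total_b = 23          # what Verizon reportedly spent building FiOS

annual_cut_b = previous_capex_b - new_capex_b
years_to_match_fios = fios_total_b / annual_cut_b
print(f"Annual capital reduction: ${annual_cut_b}B")
print(f"A FiOS-scale build (${fios_total_b}B) is roughly "
      f"{years_to_match_fios:.1f} years of the amount just cut from the budget")
```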

So I ask, is AT&T really planning to aggressively expand gigabit fiber to all of these markets? Even if they only intend to cherry-pick in each market, they would need a significant increase in capital spending.

Meanwhile, AT&T will need a lot of cash to acquire DirecTV, and one has to wonder if that has any impact on their capital plans. It’s very hard to figure out large companies, and AT&T has always been the most difficult carrier to decipher. They often make conflicting statements from different parts of the business, and this seems like one such case. As somebody who supports building more fiber infrastructure, I hope that their claims are not just hype and that the dollars for fiber construction will appear somehow when they are needed. What I suspect is that the various business lines within the company are having a furious tug-of-war right now. We’ll just have to see how this plays out. But since most of their profits come from wireless, one has to suspect that wireless has the upper hand.