
Can Cable Networks Deliver a Gigabit?

Time Warner Cable recently promised the Los Angeles City Council that they could bring gigabit service to the city by 2016. This raises the question – can today's cable networks deliver a gigabit?

The short answer is yes, they are soon going to be able to do that, but with a whole list of caveats. So let me look at the various issues involved:

  • DOCSIS 3.1: First, a cable company has to upgrade to DOCSIS 3.1. This is the latest technology from CableLabs that lets cable companies bond multiple channels together in a cable system to deliver faster data speeds. The technology is just now hitting the market, so by next year cable companies should be able to have it implemented and tested.
  • Spare Channels: To get gigabit speeds, a cable system is going to need at least 20 empty channels on the network (the sketch after this list runs the rough arithmetic). Cable companies have spent years making digital upgrades to cram more channels into the existing channel slots. But they also face continued demands to carry more channels, which eats up slots, and they are looking at possibly having to carry some channels of 4K programming, which is a huge bandwidth eater. For networks without many spare channels it can be quite costly to free up this much empty space, but many networks will have this many channels available now or in the near future.
  • New Cable Modems: DOCSIS 3.1 requires a new, and relatively expensive, cable modem. Because of this, a cable company is going to want to keep existing data customers where they are on the system and use the new swath of bandwidth selectively for the new gigabit customers.
  • Guaranteed versus Best Effort: If a cable company wants to guarantee gigabit speeds then they are not going to be able to have too many gigabit customers on a given node. This means that as the number of gigabit customers grows they will have to 'split' nodes, which often means building more fiber to feed the new nodes plus an electronics upgrade. In systems with large nodes this might be the most expensive part of the upgrade to gigabit. The alternative is a best-effort product that is only capable of a gigabit at 3:00 in the morning when the network has no other traffic.
  • Bandwidth to the Nodes: Not all cable companies are going to have enough existing bandwidth between the headend and the nodes to incorporate an additional gigabit of data. That will mean an upgrade of the node transport electronics.
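
A quick back-of-the-envelope sketch puts numbers on the channel math above. The per-channel rates are my assumptions for illustration (roughly 38 Mbps of usable throughput from a 256-QAM DOCSIS 3.0 channel, and call it 50 Mbps per 6 MHz channel slot under DOCSIS 3.1 OFDM), not official CableLabs figures:

```python
import math

def channels_needed(target_mbps: float, mbps_per_channel: float) -> int:
    """How many 6 MHz channel slots must be bonded to hit a target speed."""
    return math.ceil(target_mbps / mbps_per_channel)

print(channels_needed(1000, 38))  # DOCSIS 3.0-style channels -> 27
print(channels_needed(1000, 50))  # assumed DOCSIS 3.1 efficiency -> 20
```

Under those assumptions, the 'at least 20 empty channels' figure falls right out of the arithmetic.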

So the answer is that Time Warner will be capable of delivering a gigabit next year as long as they upgrade to DOCSIS 3.1, have enough spare channels, and don't sign up so many gigabit customers that they end up needing massive node upgrades.

And that is probably the key point about cable networks and gigabit. Cable networks were designed to provide shared data among many homes at the same time. This is why cable networks have been infamous for slowing down at peak demand times when the number of homes using data is high. And that’s why they have always sold their speeds as ‘up to’ a listed number. It’s incredibly hard for them to guarantee a speed.

When you contrast this with fiber, it's relatively easy for somebody like Google to guarantee a gigabit (or any other speed). Their fiber networks share data among a relatively small number of households and they are able to engineer the network to meet the peak speeds.
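
A toy calculation shows why 'up to' has to do so much work on a shared network. All of the figures are assumptions picked for illustration – a 1 Gbps swath of channels, a 200-home node, and a quarter of the homes busy at the peak:

```python
def worst_case_mbps(node_capacity_mbps, busy_homes):
    """Per-home throughput if every busy home pulls data at once."""
    return node_capacity_mbps / busy_homes

# 1 Gbps shared across a 200-home node with 25% of homes busy at the peak:
print(worst_case_mbps(1000, 200 // 4))  # -> 20.0 Mbps per busy home

# Split it into 50-home nodes and the same peak looks far better:
print(worst_case_mbps(1000, 50 // 4))   # -> ~83.3 Mbps per busy home
```

That is the arithmetic behind node splits: the only ways to raise the guaranteed floor are more capacity per node or fewer homes per node.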

Cable companies will certainly be able to deliver a gigabit speed. But for a while I find it unlikely that they are going to price it at $70 like Google or try to push it to very many homes. There are very few, if any, cable networks that are ready to upgrade all or even most of their customers to gigabit speeds. There are too many chokepoints in their networks that cannot handle that much bandwidth.

But as long as a cable network meets the base criteria I discussed, it can sell some gigabit without too much strain. Expect them to price gigabit bandwidth high enough that no more than 5% or so of customers take the high-bandwidth product. There are other network changes coming that will make this easier. I just talked last week about a new technology that will move the CMTS to the nodes, something that will make it easier to offer large bandwidth. This also gets easier as cable systems move closer to offering IPTV, or at least to finding ways to be more efficient with television bandwidth.

Finally, there is always the Comcast solution. Comcast today is selling a 2 gigabit connection that is delivered over fiber. It’s priced at $300 per month and is only available to customers who live very close to an existing Comcast fiber. Having this product allows Comcast to advertise as a gigabit company, even though this falls into the category of ‘press release’ product rather than something that very many homes will ever decide to buy. We’ll have to wait and see if Time Warner is going to make gigabit affordable and widely available. I’m sure that is what the Los Angeles City Council thinks they heard, but I seriously doubt that is what Time Warner meant.


The Quiet Expansion of Wi-Fi Networks


I am sure I am like most business travelers: one of the first things I look for when I get to a new place is a WiFi connection for both my laptop and cellphone. Finding WiFi lets me get online with the computer and stops me from racking up data charges on my cell plan.

And for the longest time there has been very little public WiFi outside of Starbucks and hotels. But that is starting to change, at least in some places. There are several companies that have quietly been pursuing WiFi deployments.

The biggest of these are the cable companies. It's hard to get accurate counts of how many hot spots they have deployed. In 2012 a consortium of cable companies – Comcast, Cox, Time Warner, Bright House and Optimum – banded together as the Cable WiFi consortium to deploy hotspots. Comcast claims that the industry has deployed over 300,000 hot spots, while the Cable WiFi web site claims over 200,000. But whatever the number, this is far larger than anybody else's deployment.

The Cable WiFi networks are offered to the customers of those companies as a mobile data extension of their service. Today these hotspots are centered around big cities – the northeastern corridor, San Francisco, Chicago, Los Angeles, Tampa, Austin and others.

The next biggest provider is AT&T, which claims about 30,000 hot spots and over 705 million connections onto its WiFi network in the fourth quarter of 2012. Meanwhile, Google has announced that it is getting into the game, and nobody knows how big they might get with this effort. Their first announcement is that they are taking over all of the hotspots at Starbucks Coffee (which accounts for a lot of the AT&T hotspots).

The cable companies have been deploying the hotspots in several ways. In some communities they are installing them on utility poles. In other situations they are going into establishments similar to the Starbucks WiFi.

WiFi is becoming more and more important to people's daily lives, so this trend is going to be very popular. Cellphone plans are getting stingier and stingier with cellular data at the same time that cell phones and tablets have the ability to use more and more data. If that data is not offloaded onto WiFi networks then customers are facing some gigantic cellphone bills.
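
As a quick illustration of the kind of bill that creates – with a purely hypothetical plan, since none of these numbers come from any actual carrier:

```python
plan_cap_gb = 2.0       # hypothetical monthly cellular data bucket
overage_per_gb = 15.00  # hypothetical charge for each GB over the cap
monthly_usage_gb = 6.0  # what a video-heavy phone can burn through

overage = max(0.0, monthly_usage_gb - plan_cap_gb) * overage_per_gb
print(f"overage this month: ${overage:.2f}")  # -> $60.00 without WiFi offload
```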

WiFi is never going to be a replacement for cellular. For example, the technology and spectrum used make it very difficult to do dynamic handoffs like the ones your cell phone performs. You can literally walk out of WiFi coverage on foot, while cellular coverage will stay with you in a car at 60 miles per hour.

But people are finding more and more uses for WiFi all of the time, and so the desire for public WiFi is probably going to explode. The cable companies report that every time they open a new hot spot, usage explodes soon after people figure out it is available. One area where they have seen the biggest use is the Jersey shore, where vacationers and visitors are relieved to find WiFi available.

Anybody building a fiber network ought to consider a wireless deployment. There are several ways to monetize the investment. The obvious revenue from WiFi comes from daily, weekly and monthly usage fees. But if you are a triple play provider, a more subtle benefit of wireless is in making your customers stickier, since you are giving them a mobile component of their data service. Another revenue stream is to sell prioritized WiFi access to the local municipality, electric company and others, meaning that their employees get priority on the network, with first responders trumping everybody else. There are also smaller revenue streams, such as earning commissions on the DNS traffic of people who purchase products over your WiFi network.
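
Here is a minimal sketch of that kind of tiering, with an invented three-class scheme; real gear would enforce this with QoS settings rather than application code, so treat it purely as illustration:

```python
import heapq
import itertools

# Invented priority classes: lower number drains first.
PRIORITY = {"first_responder": 0, "municipal": 1, "retail": 2}

order = itertools.count()  # tie-breaker keeps equal-priority clients FIFO
queue = []

def enqueue(client, tier):
    heapq.heappush(queue, (PRIORITY[tier], next(order), client, tier))

enqueue("vacationer-laptop", "retail")
enqueue("meter-reader", "municipal")
enqueue("ambulance-tablet", "first_responder")

while queue:
    _, _, client, tier = heapq.heappop(queue)
    print(f"serving {client} ({tier})")
# serves the first responder first, then municipal, then retail
```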


How Vulnerable is the Internet?


A question you hear from time to time is how vulnerable the US Internet backbone is in terms of losing access if something happens to the major hubs. The architecture of the Internet has grown in response to the way that carriers have decided to connect to each other and there has never been any master plan for the best way to design the backbone infrastructure.

The Internet in this country is basically a series of hubs with spokes. There are a handful of large cities with major regional Internet hubs like Los Angeles, New York, Chicago, Dallas, Atlanta, and Northern Virginia. And then there are a few dozen smaller regional hubs, still in fairly large cities like Minneapolis, Seattle, San Francisco, etc.

Back in 2002 some scientists at Ohio State studied the structure of the Internet at the time and concluded that taking out the major hubs would have essentially crippled the Internet. At that time almost all Internet traffic in the country routed through those hubs, and losing even a few of them would have wiped out a lot of the Internet.

Later, in 2007, scientists at MIT looked at the web again and estimated that taking out the major hubs would wipe out about 70% of US Internet traffic, but that peering would allow about 33% of the traffic to keep working. And at that time peering was somewhat new.
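
A toy graph model makes the studies' point concrete. The topology below is invented for illustration, not taken from either study: regional ISPs home to one or two major hubs, the hubs peer with each other, and we check who can still reach anything once a hub goes dark:

```python
from collections import deque

# Invented topology: each regional ISP homes to one or two major hubs,
# and the hubs interconnect with each other.
links = {
    "isp_a": ["dallas"], "isp_b": ["dallas", "atlanta"],
    "isp_c": ["atlanta"], "isp_d": ["chicago"],
    "isp_e": ["chicago", "dallas"],
    "dallas": ["atlanta", "chicago"],
    "atlanta": ["dallas", "chicago"],
    "chicago": ["dallas", "atlanta"],
}

def reachable(start, dead):
    """Breadth-first search over the link map, skipping failed nodes."""
    if start in dead:
        return set()
    seen, todo = {start}, deque([start])
    while todo:
        node = todo.popleft()
        for nbr in links.get(node, []):
            if nbr not in dead and nbr not in seen:
                seen.add(nbr)
                todo.append(nbr)
    return seen

# With Dallas down, isp_a is completely cut off; isp_b still has Atlanta.
print("isp_a reaches:", reachable("isp_a", dead={"dallas"}))
print("isp_b reaches:", reachable("isp_b", dead={"dallas"}))
```

An ISP with a single path to a single hub is exactly the 'completely isolated' case described below, while the ones that pay for diversity survive a single-hub failure.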

Since then there is a lot more peering, but one has to ask if the Internet is any safer from catastrophic outage than it was in 2007. One thing to consider is that a lot of the peering happens today at the major Internet hubs. In those locations the various carriers hand traffic directly to each other rather than paying fees to send it through an 'Internet port', which is nothing more than a point where some carrier will determine the best routing of the traffic for you.

And so peering at the major Internet hubs is a great way to save money, but it doesn't really change the way Internet traffic is routed. My clients are smaller ISPs, and I can tell you how they decide to route Internet traffic. The smaller ones find a carrier who will transport it to one of the major Internet hubs. The larger ones can afford diversity, and so they find carriers who can carry the traffic to two different major Internet hubs. But by and large, every bit of traffic from my clients goes to and through the handful of major Internet hubs.

And this makes economic sense. The original hubs grew in importance because that is where the major carriers at the time, companies like MCI and Qwest, already had switching hubs. And because transport is expensive, every regional ISP sent its growing Internet traffic to the big hubs because that was the cheapest solution.

If anything, there might be more traffic routed through the major hubs today than there was in 2007. Every large fiber backbone and transport provider has arranged their transport networks to get traffic to these locations.

In each region of the country my clients are completely reliant on the Internet hubs. If a hub like the one in Dallas or Atlanta went down for some reason, ISPs that send traffic to that location would be completely isolated and cut off from the world.

There was a recent report in the Washington Post that said the NSA had agents working at only a handful of major US Internet POPs because that gave them access to most of the Internet traffic in the US. That seems to reinforce the idea that the major Internet hubs in the country have grown in importance.

In theory the Internet is a disaggregated, decentralized system, and if traffic can't go the normal way it finds another path to take. But this idea only works if ISPs can get traffic to the Internet in the first place. A disaster that takes out one of the major Internet hubs would cut off a lot of the towns in the surrounding region from any Internet access. Terrorist attacks that take out more than one hub would wipe out a lot more places.

Unfortunately there is no grand architect behind the Internet who is looking at these issues, because no one company has any claim to deciding how the Internet works. Instead the carriers involved have all migrated to the handful of locations where it is most economical to interconnect with each other. I sure hope, at least, that somebody has figured out how to make those hub locations as safe as possible.


Spying on our Internet Infrastructure


Everybody I know in the telecom industry has been following the controversy surrounding the allegations that the NSA has been gathering information on everybody's Internet usage. What I find somewhat amusing are the smaller ISPs who are telling people that they have not cooperated with the NSA and that it is 'safe' for customers to use them. That is a great marketing ploy, but it is far from the truth. The Internet infrastructure in the country is very complex, but for the most part the data traffic in the country can be characterized in three ways: core Internet, peering and private traffic.

The private data traffic is just that. There are huge numbers of private data connections in the country that are not part of the ‘Internet’. For example, every banking consortium has a private network that connects branches and ATMs. Large corporations have private connections between different locations within the company. Oil companies have private data circuits between the oil fields and headquarters. And for the most part the data on these networks is private. Most corporations that use private networks do so for security purposes and many of them encrypt their data.

The FBI has always been able to get a 'wiretap' on private data circuits using a set of rules called CALEA (the Communications Assistance for Law Enforcement Act). The CALEA rules prescribe the processes the FBI must use to wiretap any data connection. But over the years I have asked hundreds of network technicians if they have ever seen a CALEA request, and from what I can see this is not a widely used tool. It would require active assistance from telecom companies to tap into private data circuits, and there just does not seem to be much of that going on. Of course, there is also not a lot of likelihood of finding spy-worthy information in data dumps between oil pumps and many of the other sorts of transactions that happen on private networks.

But the NSA is not being accused of spying on private corporate data. The allegations are that they are monitoring routine Internet traffic and that they possess records of every web site visited and every email that is being sent over the Internet. And it seems plausible to me that the NSA could arrange this.

The Internet in the US works on a hub and spoke infrastructure. There are major Internet hubs in Los Angeles, New York, Atlanta, Chicago, Dallas and Washington DC. Most 'Internet' traffic ends up at one of these hubs. There are smaller regional hubs, but all of the Internet traffic that comes from Albuquerque, Phoenix, San Francisco, Las Vegas and the other cities in that region will eventually end up in Los Angeles. You will hear ISP technicians talk about 'hops', meaning how many different regional hubs an Internet transmission must pass through before it gets to one of these major hubs.

So when some smaller Internet provider says that the NSA does not have their data, they are being hopeful or naive, or they are just doing PR. I recall an article a few months back in which Comcast, Time Warner and Cox all said that they had not cooperated with the NSA and that it was safer to use their networks than those of AT&T and Verizon, who supposedly have cooperated. But everything that comes from the Comcast and Cox networks ends up at one of these Internet hubs. If the NSA has figured out a way to collect data at these hubs then there would be no reason for them to come to the cable companies and ask for direct access. They would already be gathering the data on the customers of these companies.

But then there is the third piece of the Internet: the peering network. Peering is the process of carriers handing data directly to each other rather than sending it out over the general Internet. Companies do this to save money. There is a significant cost to send information to and from the Internet. Generally an ISP has to buy transport, meaning the right to send information through somebody's fiber cable, and 'ports' into the Internet, meaning a bandwidth connection from the companies that own the Internet portals in those major hubs. If an ISP has enough data that goes to Google, for example, and a convenient place to meet Google that costs less than going to their normal Internet hub, then they will hand that traffic directly to Google and avoid paying for the Internet ports.
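
The economics are easy to sketch. Every price below is an assumption I made up for illustration, not a quoted market rate:

```python
def monthly_cost_transit(mbps, port_per_mbps, transport):
    """Send everything to the big hub: transport circuit plus Internet port."""
    return transport + mbps * port_per_mbps

def monthly_cost_with_peering(mbps, peer_fraction, port_per_mbps,
                              transport, peering_link):
    """Hand peer_fraction of the traffic (say, Google-bound) off directly."""
    transit_mbps = mbps * (1 - peer_fraction)
    return transport + transit_mbps * port_per_mbps + peering_link

traffic = 5000  # Mbps at the peak
print(monthly_cost_transit(traffic, port_per_mbps=1.50, transport=4000))
# -> 11500.0
print(monthly_cost_with_peering(traffic, peer_fraction=0.4,
                                port_per_mbps=1.50, transport=4000,
                                peering_link=1500))
# -> 10000.0
```

Peering wins whenever the avoided port fees exceed the cost of the direct link, which is why it starts to make sense even for mid-sized ISPs.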

And peering is also done locally. It is typical for the large ISPs in big cities to hand each other Internet traffic that is heading toward each other's network. Peering used to be something done only by the really large ISPs, but I now have ISP clients with as few as 10,000 customers who can justify some peering arrangements to save money. I doubt that anybody but the biggest ISPs understands what percentage of traffic is delivered through peering versus directly through the more traditional Internet connections.

But the peering traffic is growing all of the time, and to some extent peering traffic can bypass NSA scrutiny at the Internet hubs. Even so, it sounds like the NSA has probably gotten its hands on a lot of the peering traffic too. For instance, a lot of peering traffic goes to Google, and if the NSA has an arrangement with Google then that catches a lot of the peering traffic.

There certainly are smaller peering arrangements that the NSA could not intercept without direct help from the carriers involved. For now that would be the only ‘safe’ traffic on the Internet. But if the NSA is at the Internet hubs and also has arrangements with the larger companies in the peering chains, then they are getting most of the Internet traffic in the country. There really are no ‘safe’ ISPs in the US – just those who haven’t had the NSA knocking on their doors.
