Converting to IPv6

By now most of you know that a new version of Internet addressing has been introduced, known as IP version 6 (IPv6). The process of integrating the new protocol into the network has already begun, and it's now time for smaller ISPs like my clients to start looking at how they are going to make the transition. I call it a transition because the planned process is for the old IPv4 and the new IPv6 to coexist side-by-side until the old protocol is eventually phased out of existence. Some experts predict that the last vestiges of IPv4 addressing will survive until 2030, but between now and then every part of the Internet will begin the transition and start using the new address scheme.

The IPv6 specification makes major changes to internet addressing. Not only has the IP address length been extended to 128 bits but also the IP header format and the way header information is processed have been modified. Thus, transitioning from IPv4 to IPv6 is not going to be straightforward and it is going to take some work to go from old to new.

I think it is time to start thinking about how you are going to enable both kinds of routing. Any small ISP will want to do this in a controlled and leisurely manner rather than waiting until there is an urgent need for it on the network. There are already new kinds of hardware and software systems that are going to prefer the new protocol, so small ISPs ought to get ready for the change before they get a frantic call from a large customer asking why this doesn't work on their network.

The basic process to get ready for IPv6 is to make certain that your core routers and other host systems in your network are able to handle IPv6 routing. There are three different techniques being used around the country to make the transition.

Dual-stack Network. With this approach hosts and routers implement both the IPv4 and IPv6 protocols, which lets your network support both address families during the transition period. This is currently the most common technique being used to introduce IPv6 into legacy networks (a rough code sketch of what dual-stack support looks like follows this list). The biggest downside of the approach is that you must maintain a mirror-image IPv4 address for every new IPv6 address, and the whole point of moving to IPv6 was the scarcity of IPv4 addresses.

Tunnelling. This technique essentially hands off all new IPv6 routing to somebody else in the cloud. To make this work your network would encapsulate IPv6 packets while they are crossing your existing IPv4 network and decapsulate the packets at the border to the external cloud. This is somewhat complex to establish but reports are that it can work well when configured correctly.

Use a Translation Mechanism. This method is necessary when an IPv6-only host has to communicate with an IPv4 host. At a minimum this requires translating the packet headers, but it can get a lot more complicated.
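To make the dual-stack idea a little more concrete, here is a minimal sketch in Python of a single listener that accepts both IPv4 and IPv6 clients. Nothing here comes from a particular vendor or from the transition plans described above; it simply assumes a host operating system that allows the IPV6_V6ONLY socket option to be turned off, and the port number is arbitrary.

```python
# A minimal dual-stack sketch: one IPv6 socket that also accepts IPv4 clients
# (IPv4 clients show up as IPv4-mapped addresses like ::ffff:192.0.2.10).
import socket

def make_dual_stack_listener(port):
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # 0 = allow both IPv6 and IPv4-mapped connections on this one socket
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", port))   # "::" is the IPv6 wildcard address
    sock.listen(5)
    return sock

if __name__ == "__main__":
    listener = make_dual_stack_listener(8080)   # port chosen for illustration
    conn, addr = listener.accept()
    print("client connected from", addr)
    conn.close()
```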

And, as one would suspect, you can mix and match these techniques as needed. It's obvious to me that this could become very complex, and there appear to be a lot of chances to mess up your network routing if you don't do it right. Because of this we think it makes sense to start planning early on how you are going to make the transition. You do not want to wait until one or more of your largest customers is breathing down your neck demanding IPv6. Contact us at CCG and we can help you put together a plan for an orderly transition to IPv6.

Opportunity Abounds


I am often asked about ideas for building a fiber network that can make money. Right now in this country there is a huge opportunity that almost nobody is taking advantage of. There have been tens of thousands of miles of middle mile fiber built in the last five years using federal stimulus grants. Additionally there are other networks around the country that have been built by state or other kinds of grants. And there has also been fiber built to thousands of rural cell phone towers.

These networks are largely rural and in most cases the networks have only been used to connect small rural towns and to serve anchor institutions, or built to go only to cell towers. If you look at these networks closely you will see miles and miles of fiber that goes from county seat to small town to county seat with a few spurs serving schools, health facilities, junior colleges, city halls and cell towers. But for the most part the fiber has not been used to serve anything else.

The whole stimulus grant program was cooked up quickly and was not a well-planned affair. The government tried to make awards in every state and we ended up with a true hodge-podge of networks being built. In some cases it looks to me like networks to nowhere were built, but a large percentage of the stimulus grants went through rural areas where there are nice pockets of customers.

For years I have advocated a business plan that builds fiber in short spurs in situations where success is guaranteed. For example, one might build to one large business whose revenue will pay for the fiber route, or, as is most likely these days, to a cell tower. And so building to that single guaranteed customer can be a successful business plan.

However, any carrier who stops with that one customer is missing the real profit opportunity in such a build. The best business plan I can find today is to build to an anchor tenant and then do everything possible to sign every customer passed along the way to that new anchor customer. In economic terms you can think of the cost of the fiber build as a sunk cost. Generally in any business when you make a sunk-cost investment, the goal is then to maximize the revenue that investment can generate.

And so, if the anchor tenant you have found can justify the fiber build and pay for the sunk-cost investment, then adding additional customers to that same fiber investment becomes a no-brainer in economic terms. The extra customers can be added for the cost of a drop and fiber terminal device, and in terms of return, adding a home or small business might have a higher margin than the original anchor tenant.
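To put some illustrative numbers on that logic, here is a quick back-of-the-envelope sketch. Every figure in it is an assumption I made up for the example (drop cost, fiber terminal cost, monthly price and operating cost); the point is simply that only the incremental cost matters once the route itself is treated as sunk.

```python
# Payback on an incremental customer added to an already-built (sunk cost) fiber route.
# All dollar figures below are illustrative assumptions, not client data.
def months_to_payback(drop_cost, terminal_cost, monthly_revenue, monthly_op_cost):
    incremental_capex = drop_cost + terminal_cost          # the only new investment
    monthly_margin = monthly_revenue - monthly_op_cost
    return incremental_capex / monthly_margin

# Assumed: $600 drop, $250 fiber terminal, $70/month data product, $15/month to operate
print(round(months_to_payback(600, 250, 70, 15), 1))       # ~15.5 months to recover the drop
```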

The key to making this business plan work is to keep it simple. You don't need to be in the triple play business to add residential customers. Offering a very high-speed data connection at a bargain price is good enough to land a good long-term customer with very little effort required from the carrier. If you already happen to be in the triple play business and have all of the back-office support for such customers then you can consider that as an option, but offering only data is a profitable business.

And so the business plan is to look around you and see where there are facilities built but underutilized. The key to making this work is to get cheap transport to reach the new pocket of customers. By law the stimulus grant networks need to give cheap access to somebody willing to build the last mile. But commercial network owners will also make you a good offer for transport if you can bring them a new revenue opportunity in a place they didn't expect it. So the key is to first work with the network providers and then look at specific opportunities.

And you possibly don’t even need much, if any staff to do this. There is already somebody maintaining the backbone fibers and they will probably be willing to support your fiber spurs. And it’s quite easy today to completely outsource the whole ISP function. The only thing that is really needed is the cash needed to build fiber spurs and connect customers. The more you have the better you can do, but you could build a respectable little business with only a few hundred thousand dollars.

If you are in a rural area there are probably dozens, and maybe hundreds of these opportunities around you if you look for them with the right eye. As the header of this blog says, opportunities abound.

How Vulnerable is the Internet?


A question you hear from time to time is how vulnerable the US Internet backbone is in terms of losing access if something happens to the major hubs. The architecture of the Internet has grown in response to the way that carriers have decided to connect to each other and there has never been any master plan for the best way to design the backbone infrastructure.

The Internet in this country is basically a series of hubs with spokes. There are a handful of large cities with major regional Internet hubs like Los Angeles, New York, Chicago, Dallas, Atlanta, and Northern Virginia. And then there are a few dozen smaller regional hubs, still in fairly large cities like Minneapolis, Seattle, San Francisco, etc.

Back in 2002 some scientists at Ohio State studied the structure of the Internet at the time and concluded that taking out the major hubs would have essentially crippled the Internet. At that time almost all Internet traffic in the country routed through the major hubs, and knocking out a few of them would have wiped out a lot of the Internet.

Later in 2007 scientists at MIT looked at the web again and they estimated that taking out the major hubs would wipe out about 70% of the US Internet traffic, but that peering would allow about 33% of the traffic to keep working. And at that time peering was somewhat new.

Since then there is a lot more peering, but one has to ask if the Internet is any safer from a catastrophic outage than it was in 2007. One thing to consider is that a lot of the peering happens today at the major Internet hubs. In those locations the various carriers hand traffic directly to each other rather than paying fees to send the traffic through an 'Internet port', which is nothing more than a point where some carrier will determine the best routing of the traffic for you.

And so peering at the major Internet hubs is a great way to save money, but it doesn't really change the way Internet traffic is routed. My clients are smaller ISPs, and I can tell you how they decide to route Internet traffic. The smaller ones find a carrier who will transport their traffic to one of the major Internet hubs. The larger ones can afford diversity, and so they find carriers who can carry the traffic to two different major Internet hubs. But by and large every bit of traffic from my clients goes to and through the handful of major Internet hubs.

And this makes economic sense. The original hubs grew in importance because that is where the major carriers of the time, companies like MCI and Qwest, already had switching hubs. And because transport is expensive, every regional ISP sent their growing Internet traffic to the big hubs because that was the cheapest solution.

If anything, there might be more traffic routed through the major hubs today than there was in 2007. Every large fiber backbone and transport provider has arranged their transport networks to get traffic to these locations.

In each region of the country my clients are completely reliant on the Internet hubs. If a hub like the one in Dallas or Atlanta went down for some reason, ISPs that send traffic to that location would be completely isolated and cut off from the world.

There was a recent report in the Washington Post that said the NSA had agents working at only a handful of major US Internet POPs because that gave them access to most of the Internet traffic in the US. That seems to reinforce the idea that the major Internet hubs in the country have grown in importance.

In theory the Internet is a disaggregated, decentralized system, and if traffic can't go the normal way it finds another path to take. But this idea only works if ISPs can get traffic to the Internet in the first place. A disaster that takes out one of the major Internet hubs would cut off a lot of the towns in the surrounding region from any Internet access. Terrorist attacks that take out more than one hub would wipe out a lot more places.

Unfortunately there is no grand architect behind the Internet who is looking at these issues, because no one company has any claim to deciding how the Internet works. Instead the carriers involved have all migrated to the handful of locations where it is most economical to interconnect with each other. I sure hope, at least, that somebody has figured out how to make those hub locations as safe as possible.

Time for a New Spectrum Plan

The spectrum in this country is a mess. And this is not necessarily a complaint against the FCC because much of the mess was not foreseeable. But the FCC has contributed at least some to the mess and if we are going to be able to march into the future we need to start from scratch and come up with a new plan.

Why is this needed? Because of the sheer volume of devices and uses that we see coming for wireless spectrum. The spectrum that the wireless carriers are using today is already inadequate for the data they are selling to customers. The cellular companies are only keeping up because a large percentage of wireless data is being handed off to WiFi today. But what happens when WiFi gets too busy or there are just too many devices?

As of early 2013 there were over half a billion Internet-connected devices in the US. This is something that ISPs can count, so we know the number is fairly accurate. And the number of devices being connected is growing really quickly. We are not device nuts in my house and our usage is pretty normal, yet we have a PC, a laptop, a tablet, a reader and two cell phones connected to wireless. And I am contemplating adding the TV and putting in a new burglar alarm system, which would easily double our devices overnight.

A huge number of devices are counting on WiFi to work adequately to handle everything that is needed. But we are headed for a time when WiFi is going to be higher power and capable of carrying a lot more data, and with that comes the risk that the WiFi waves will get saturated in urban and suburban environments. If every home has a gigabit router running full blast a lot of the bandwidth is going to get cancelled out by interference.

What everybody seems to forget, and which has already been seen in the past with other public spectrum, is that every frequency has physical limits. And our giant conversion to the Internet of Things will come to a screeching halt if we ask more of the existing spectrum than it can physically handle.

So let's jump back to the FCC and the way it has handled spectrum. Nobody saw the upcoming boom in wireless data two decades ago. Three decades ago the smartest experts in the country were still predicting that cell phones would be a market failure. But for the last decade we have known what was coming – and the use of wireless devices is growing faster than anybody expected, due in part to the success of smartphones. And we are on the edge of the Internet of Things needing gigantic bandwidth, which will make cell phone data usage look tiny.

One thing the FCC has done that hurts the way we use spectrum is to chop almost every usable band into a number of small channels. There are advantages to this in that different users can grab different discrete channels without interfering with each other, but the downside of small channels is that any given channel doesn't carry much data. So one thing we need is some usable spectrum with broader channels.
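As a rough illustration of why channel width matters so much, the Shannon formula ties the ceiling on data capacity directly to channel bandwidth. The signal-to-noise figures below are assumptions picked only to show the comparison; they are not measurements of any real band.

```python
# Shannon capacity ceiling: C = B * log2(1 + SNR). Wider channels raise the
# ceiling in direct proportion, which is the argument for broader channels.
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)   # MHz * (bits/s)/Hz = Mbps

# Assumed 20 dB signal-to-noise ratio in both cases, purely for comparison:
print(round(shannon_capacity_mbps(6, 20)))    # a narrow 6 MHz channel: ~40 Mbps ceiling
print(round(shannon_capacity_mbps(80, 20)))   # an 80 MHz channel: ~533 Mbps ceiling
```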

The other way we can get out of the spectrum pinch is to reallocate more spectrum to wireless data and then let devices roam over a large range of spectrum. With software defined radios we now have chips that are capable of using a wide variety of spectrum and can change on the fly. So a smart way to move into the future is to widen the spectrum available to our wireless devices. If one spectrum is busy in a given local area the radios can find something else that will work.

Anybody who has ever visited a football stadium knows what it’s like when spectrum gets full. Practically nobody can get a connection and everybody is frustrated. If we are not careful, every downtown and suburban housing area is going to look like a stadium in terms of frequency usage, and nobody is going to be happy. We need to fix the spectrum mess and have a plan for a transition before we get to that condition. And it’s going to be here a lot sooner than anybody hopes.

Finding a Broadband Partner


The NTIA issued a notice last week asking whether it should continue the BroadbandMatch website tool. This tool was created during the stimulus grant process and the original goal was to connect partners for applying for or implementing the broadband grants. And the process worked. One of the components of the grants was the requirement for matching funds, and there were many grant applicants with a great idea who had to find a partner to supply the matching funds. A significant percentage of the stimulus grants involved multiple parties and many of them found their partners using this tool.

On the NTIA tool a company would describe what they were trying to do and the kind of partner they were looking for. And the main reason this worked was that the government was giving away billions of dollars for fiber construction, and so a lot of companies were looking for a way to get in on the action. Many of the companies involved in the grant process were new companies formed just to get the grants. The NTIA tool gave companies who were not historically in the telecom business a way to find potential partners.

The NTIA asks if they should keep this service going, and if so how it ought to work. I will be the first to say that I was surprised that the tool was even still around since it was clearly designed to put together people to make stimulus grants work. The only way a tool like this can work now is if everybody in the industry knows about it and thinks to look there when they are interested in making an investment.

But I am going to guess that if I didn't know this tool was still active, hardly anybody else does either. It was great for the purpose it was designed for, but one has to ask if this is going to be a place where companies look when they are seeking a partner. It has been my experience that, outside of that grant process, which was very public, most people want to keep the process of forming new ventures as quiet as possible to avoid tipping off the competition too early. And so, without the billions of public dollars that made the grants attractive, I can't see this tool being of much interest.

But this leads me to ask how a company can find a partner for a new telecom venture? The most normal type of partnership I see is one made between people with technical expertise looking for investors and people with cash looking for opportunities. So how do these kinds of partners find each other?

At CCG we have helped numerous carriers find partners and the following, in our experience, is what has worked and not worked:

  • Put out a formal request for a partner. This may mean issuing an RFP or an RFI, or advertising somewhere to find interested parties. I have not found this process to be particularly fruitful, because it normally doesn't uncover any potential partners you didn't already know about.
  • Get to know your neighbors better. I have found that most partnerships end up being made by people in the same geographic area. It is not uncommon for the parties to not know each other well before the partnership, and sometimes they are even competitors. But there is a lot more chance that people in your region will best understand the potential for local opportunities.
  • Don’t be afraid to cross the line. Commercial CLECs and independent telephone companies are usually dismayed by municipalities that get into the telecom business. But generally those cities are just hungry for broadband and in almost every case they would prefer that a commercial provider come and build the infrastructure in their community. So crossing the line and talking to municipalities might uncover the best local partnership opportunities. If a town wants broadband badly enough (and many of them do) then they might be willing to provide concessions and cash to make it work.

Of course, this doesn’t even begin to answer the question of how to make a partnership work, which I will address in later blogs this week.

Should You Be Peering?


No, this is not an invitation for you to become peeping toms, dear readers. By peering I am talking about the process of trading Internet traffic directly with other networks to avoid paying to transport all of your Internet traffic to the major Internet POPs.

Peering didn’t always make a lot of sense, but there has been a major consolidation of web traffic to a few major players that has changed the game. In 2004 there were no major players on the web and internet traffic was distributed among tens of thousands of websites. By 2007 about 15,000 networks accounted for about half of all of the traffic on the Internet. But by 2009 Google took off and it was estimated that they accounted for about 6% of the web that year.

And Google has continued to grow. There were a number of industry experts who estimated at the beginning of this year that Google carried 25% to 30% of all of the traffic on the web. But on August 16 Google went down for about 5 minutes and we got a look at the real picture. A company called GoSquared Engineering tracks traffic on the web worldwide, and when Google went down they saw an instant 40% drop in overall web traffic, as shown in their graph titled 'Google's downtime caused a 40% drop in global traffic'.

And so, when Google went dead for a few minutes, they seem to have been carrying about 40% of the web traffic at the time. Of course, the percentage carried by Google varies by country and by time of day. For example, in the US a company called Sandvine, which sells Internet tracking systems, estimates that Netflix uses about a third of US Internet bandwidth between 9 PM and midnight in each time zone.

Regardless of the exact percentages, it is clear that a few networks have grabbed enormous amounts of web traffic. And this leads me to ask my clients if they should be peering. Should they be trying to hand traffic directly to Google, Netflix or others to save money?

Most carriers have two major cost components for delivering their Internet traffic – transport and Internet port charges. Transport is just that: a fee, often mileage-based, that pays for crossing somebody else's fiber network to get to the Internet. The port charges are the fees charged at the Internet POP to deliver traffic into and out of the Internet. For smaller ISPs these two costs might be blended together in the price you pay to connect to the Internet. So the answer to the question is: anything that produces a net lowering of one or both of these charges is worth considering.

Following is a short list of ways that I see clients take advantage of peering arrangements to save money:

  • Peer to Yourself. This is almost too simple to mention, but not everybody does this. You should not be paying to send traffic to the Internet that goes between two of your own customers. This is sometimes a fairly significant amount of traffic, particularly if you are carrying a lot of gaming or have large businesses with multiple branches in your community.
  • Peer With Neighbors. It also makes sense sometimes to peer with neighbors. These would be your competitors or somebody else who operates a large network in your community, like a university. Again, there is often a lot of traffic generated locally because of local commerce. And the amount of traffic between students and a university can be significant.
  • Peering with the Big Data Users. And finally is the question of whether you should try to peer with Google, Netflix or other large users you can identify. There are several ways to peer with these types of companies:
    • Find a POP they are at. You might be able to find a Google POP or a data center somewhere that is closer than your Internet POP. You have to do the math to see if buying transport to Google or somebody else costs less than sending it on the usual path.
    • Peer at the Internet POP. The other way to peer is to go ahead and carry the traffic to the Internet POP, but once there, split your traffic and take traffic to somebody like Google directly to them rather than pay to send it through the Internet port. If Google is really 40% of your traffic, then this would reduce your port charges by as much as 40% and that would be offset by whatever charges there are to split and route the traffic to Google at the POP.

I don't think you have to be a giant ISP any more to take advantage of peering. Certainly make sure you are peeling off traffic between your own customers, and investigate local peering if you have a significant amount of local traffic. It just takes some investigation to see if you can do the more formal peering with companies like Google. Whether peering will save you money is mostly going to be a matter of math, but I know of a number of carriers who are making peering work to their advantage. So do the math.
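For anybody who wants to literally do the math, here is a minimal sketch of the comparison. The traffic volume, port price, peered share and peering cost below are all placeholder assumptions; substitute your own transport and port quotes.

```python
# Does splitting traffic off to a peer beat paying port charges for it?
def monthly_peering_savings(total_mbps, port_cost_per_mbps, peered_share, peering_fixed_cost):
    avoided_port_charges = total_mbps * peered_share * port_cost_per_mbps
    return avoided_port_charges - peering_fixed_cost   # positive = peering wins

# Assumed: 2,000 Mbps of traffic, $2.00/Mbps port charges, 40% of traffic bound
# for the peer, and $1,000/month to split and haul that traffic at the POP.
print(monthly_peering_savings(2000, 2.00, 0.40, 1000))   # 600.0 -> saves $600 a month
```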

Spying on our Internet Infrastructure


Everybody I know in the telecom industry has been following the controversy surrounding the allegations that the NSA has been gathering information on everybody's Internet usage. What I find somewhat amusing are the smaller ISPs who are telling people that they have not cooperated with the NSA, and that it is 'safe' for customers to use them. That is a great marketing ploy but it is far from the truth. The Internet infrastructure in the country is very complex, but for the most part the data traffic in the country can be characterized in three ways: core Internet, peering and private traffic.

The private data traffic is just that. There are huge numbers of private data connections in the country that are not part of the ‘Internet’. For example, every banking consortium has a private network that connects branches and ATMs. Large corporations have private connections between different locations within the company. Oil companies have private data circuits between the oil fields and headquarters. And for the most part the data on these networks is private. Most corporations that use private networks do so for security purposes and many of them encrypt their data.

The FBI has always been able to get a 'wiretap' on private data circuits using a set of rules called CALEA (the Communications Assistance for Law Enforcement Act). The CALEA rules prescribe the processes the FBI must use to wiretap any data connection. But over the years I have asked hundreds of network technicians if they have ever seen a CALEA request, and from what I can see this is not a widely used tool. It would require active assistance from telecom companies to tap into private data circuits, and there just does not seem to be much of that going on. Of course, there is also not much likelihood of finding spy-worthy information in data dumps between oil pumps and many of the other sorts of transactions that happen on private networks.

But the NSA is not being accused of spying on private corporate data. The allegations are that they are monitoring routine Internet traffic and that they possess records of every web site visited and every email that is being sent over the Internet. And it seems plausible to me that the NSA could arrange this.

The Internet in the US works on a hub-and-spoke infrastructure. There are major Internet hubs in Los Angeles, New York, Atlanta, Chicago, Dallas and Washington DC. Most 'Internet' traffic ends up at one of these hubs. There are smaller regional hubs, but all of the Internet traffic that comes from Albuquerque, Phoenix, San Francisco, Las Vegas and all of the other cities in that region will eventually end up in Los Angeles. You will hear ISP technicians talking about 'hops', meaning how many different regional hubs an Internet transmission must go through before it gets to one of these Internet hubs.

So when some smaller Internet provider says that the NSA does not have their data they are being hopeful, naive or they are just doing PR. I recall an article a few months back where Comcast, Time Warner and Cox all said that they had not cooperated with the NSA and that it was safer to use their networks than to use AT&T and Verizon, who supposedly have cooperated. But everything that comes from the Comcast and Cox networks ends up at one of these Internet hubs. If the NSA has figured out a way to collect data at these hubs then there would be no reason for them to come to the cable companies and ask for direct access. They would already be gathering the data on the customers of these companies.

But then there is the third piece of the Internet, the peering network. Peering is the process of carriers handing data directly to each other rather than sending it out over the general Internet. Companies do this to save money. There is a significant cost to send information to and from the Internet. Generally an ISP has to buy transport, meaning the right to send information through somebody’s fiber cable. And they have to buy ‘ports’ into the Internet, meaning bandwidth connection from the companies that own the Internet portals in those major hubs. If an ISP has enough data that goes to Google, for example, and if they also have a convenient place to meet Google that costs less than going to their normal Internet hub, then they will hand that data traffic directly to Google and avoid paying for the Internet ports.

And peering is also done locally. It is typical for the large ISPs in large cities to hand each other Internet traffic that is heading towards each other’s network. Peering used to be something that was done by the really large ISPs, but I now have ISP clients with as few as 10,000 customers who can justify some peering arrangements to save money. I doubt that anybody but the biggest ISPs understand what percentage of traffic is delivered through peering versus directly through the more traditional Internet connections.

But the peering traffic is growing all of the time, and to some extent peering traffic can bypass NSA scrutiny at the Internet hubs. But it sounds like the NSA probably has gotten their hands on a lot of the peering traffic too. For instance, a lot of peering traffic goes to Google, and so if the NSA has an arrangement with Google then that catches a lot of the peering traffic.

There certainly are smaller peering arrangements that the NSA could not intercept without direct help from the carriers involved. For now that would be the only ‘safe’ traffic on the Internet. But if the NSA is at the Internet hubs and also has arrangements with the larger companies in the peering chains, then they are getting most of the Internet traffic in the country. There really are no ‘safe’ ISPs in the US – just those who haven’t had the NSA knocking on their doors.

Do You Understand Your Chokepoints?

Almost every network has chokepoints. A chokepoint is some place in the network that restricts data flow and that degrades the performance of the network beyond the chokepoint. In today’s environment where everybody is trying to coax more speed out of their network these chokepoints are becoming more obvious. Let me look at the chokepoints throughout the network, starting at the customer premise.

Many don’t think of the premise as a chokepoint, but if you are trying to deliver a large amount of data, then the wiring and other infrastructure at the location will be a chokepoint. We are always hearing today about gigabit networks, but there are actually very few wiring schemes available that will deliver a gigabit of data for more than a very short distance. Even category 5 and 6 cabling is only good for short runs at that speed. There is no WiFi on the market today that can operate at a gigabit. And technologies like HPNA and MOCA are not fast enough to carry a gigabit.

But the premise wiring and customer electronics can create a chokepoint even at slower speeds. It is a very difficult challenge to bring speeds of 100 Mbps to large premises like schools and hospitals. One can deliver fast data to the premise, but once the data is put onto wires of any kind the performance decays with distance, and generally a lot faster than you would think. I look at the recently announced federal goal of bringing a gigabit to every school in the country and I wonder how they plan to move that gigabit around the school. The answer mostly is that with today's wiring and electronics, they won't. They will be able to deliver a decent percentage of the gigabit to classrooms, but the chokepoint of wiring is going to eat up a lot of the bandwidth.

The next chokepoint in a network for most technologies is the neighborhood node. Cable TV HFC networks, fiber PON networks, cellular data networks and DSL networks all rely on creating neighborhood nodes of some kind, a node being the place where the network hands off the data signal to the last mile. And these nodes are often chokepoints in the network due to what is called oversubscription. In the ideal network there would be enough bandwidth delivered so that every customer could use all of the bandwidth they have been sold simultaneously. But very few network operators want to build that network because of the cost, and so carriers oversell bandwidth to customers.

Oversubscription is the process of bringing the same bandwidth to multiple customers since we know statistically that only a few customers in a given node will be making heavy use of that data at the same time. Effectively a network owner can sell the same bandwidth to multiple customers knowing that the vast majority of the time it will be available to whoever wants to use it.
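As a simple illustration of the arithmetic, the oversubscription ratio is just the bandwidth sold on a node divided by the bandwidth the node can actually deliver. The node size and product speed below are assumptions for the example, not numbers from any particular network.

```python
# Oversubscription ratio for a neighborhood node.
def oversubscription_ratio(customers, sold_mbps_each, node_capacity_mbps):
    return (customers * sold_mbps_each) / node_capacity_mbps

# Assumed: 200 homes sold a 50 Mbps product, all sharing a 1 Gbps node
print(oversubscription_ratio(200, 50, 1000))   # 10.0 -> the node is sold 10:1
```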

We are all familiar with the chokepoints that occur in oversubscribed networks. Cable modem networks have been infamous for years for bogging down each evening when everybody uses the network at the same time. And we are also aware of how cell phone and other networks get clogged and unavailable in times of emergencies. These are all due to the chokepoints caused by oversubscription at the node. Oversubscription is not a bad thing when done well, but many networks end up, through success, with more customers per node than they had originally designed for.

The next chokepoint in many networks is the backbone fiber electronics that deliver bandwidth from the hub to the nodes. Data bandwidth has grown at a very rapid pace over the last decade and it is not unusual to find backbone data feeds where today's usage exceeds the original design parameters. Upgrading the electronics is often costly because in some networks you have to replace the electronics to all of the nodes in order to fix the ones that are full.

Another chokepoint in the network can be hub electronics. It’s possible to have routers and data switches that are unable to smoothly handle all of the data flow and routing needs at the peak times.

Finally, there can be a chokepoint in the data pipe that leaves a network and connects to the Internet. It is not unusual to find Internet pipes that hit capacity at peak usage times of the day which then slows down data usage for everybody on the network.

I have seen networks that have almost all of these chokepoints and I’ve seen other networks that have almost no chokepoints. Keeping a network ahead of the constantly growing demand for data usage is not cheap. But network operators have to realize that customers recognize when they are getting shortchanged and they don’t like it. The customer who wants to download a movie at 8:00 PM doesn’t care why your network is going slow because they believe they have paid you for the right to get that movie when they want it.

Google and Whitespace Radios


Last week Google received approval to operate a public TV whitespace database. They are the third company, after Telcordia and Spectrum Bridge, to get this designation. The database is available to the public at http://www.google.org/spectrum/whitespace/channel/index.html. With it you can see the whitespace channels that are available in any given market in the country.

The Google announcement stems from an FCC order from April 2012 in FCC Docket 12-36A1, which is attached. This docket established the rules under which carriers can use whitespace spectrum. Having an authorized public spectrum database is the first step for a company to operate in the spectrum.

You may have seen recent press releases that talk about how Google proposes to use tethered blimps to operate in the whitespace spectrum. They are calling this system 'SkyNet', a name that sends a few shivers up the spines of movie buffs, but the blimps are an interesting concept in that they will be able to illuminate a large area with affordable wireless spectrum. By having their database approved, Google is now able to test and deploy the SkyNet blimps.

The whitespace spectrum operates in the traditional television bands and consists of a series of 6‑megahertz channels that correspond to TV channels 2 through 51, in four bands of frequencies in the VHF and UHF regions of 54-72 MHz, 76-88 MHz, 174-216 MHz, and 470-698 MHz. Whitespace radio devices that will work in the spectrum are referred to in the FCC order as TVBD devices.
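Since the channel plan above maps neatly onto 6 MHz blocks, here is a small helper that converts a TV channel number into the frequencies it occupies. It is just a restatement of the band edges listed above, handy for sanity-checking what a whitespace database reports for your market.

```python
# Map a TV channel number (2-51) to the 6 MHz block it occupies.
def channel_to_mhz(channel):
    if 2 <= channel <= 4:          # low VHF: 54-72 MHz
        low = 54 + (channel - 2) * 6
    elif 5 <= channel <= 6:        # low VHF: 76-88 MHz
        low = 76 + (channel - 5) * 6
    elif 7 <= channel <= 13:       # high VHF: 174-216 MHz
        low = 174 + (channel - 7) * 6
    elif 14 <= channel <= 51:      # UHF: 470-698 MHz
        low = 470 + (channel - 14) * 6
    else:
        raise ValueError("TV whitespace channels run from 2 to 51")
    return (low, low + 6)

print(channel_to_mhz(21))   # (512, 518)
```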

For a fixed radio deployment, meaning a radio that always sits at a home or business, a TVBD radio must be able to check back with the whitespace database daily to make sure what spectrum it is allowed to use at its location. Mobile TVBD radios have to check back more or less constantly. It is important for a radio to be able to check with the database because there are licensed uses in these bands, and a whitespace operator must always give up space to a licensed use of the spectrum as it arises.
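To make the re-check requirement concrete, here is a hedged sketch of the loop a fixed TVBD radio would run. The query_whitespace_db() helper is purely hypothetical; each approved database operator publishes its own query interface, and a real radio would use that rather than this stand-in.

```python
import time

def query_whitespace_db(lat, lon):
    """Hypothetical stand-in: return the list of TV channels usable at this location."""
    raise NotImplementedError("replace with the database operator's real interface")

def choose_channel(lat, lon, current_channel):
    allowed = query_whitespace_db(lat, lon)
    if current_channel in allowed:
        return current_channel                 # still clear of licensed users; keep using it
    return allowed[0] if allowed else None     # move channels, or go silent if nothing is free

def run_fixed_radio(lat, lon, channel):
    while True:
        channel = choose_channel(lat, lon, channel)
        time.sleep(24 * 60 * 60)               # fixed radios must re-check at least daily
```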

This means that TVBD radios must be intelligent in that they need to be able to change the spectrum they are using according to where they are deployed. Whitespace radios are also a challenge from the perspective of radio engineering in that they must be able to somehow bond multiple paths from various available, yet widely separated channels in order to create a coherent bandwidth path for a given customer.

There are whitespace radios on the market today, but my research shows that they are still not particularly affordable for commercial deployment. But this is a fairly new radio market and this is part of the normal evolution one sees after new spectrum rules hit the market. Various vendors generally develop first-generation devices that work in the spectrum, but the long-term success of any given spectrum generally depends upon having at least one vendor find a way to mass produce radios and drive down the unit costs. There have been some spectacular failures among spectrum bands released in the last few decades, such as MMDS, which failed because they never reached the point of producing affordable devices.

But one might hope that Google will find the way to produce enough radios to make them affordable for the mass market. And then maybe we will finally get an inkling of Google’s long-term plans. There has been a lot of speculation about Google’s long term plans as an ISP due to their foray into gigabit fiber networks in places like Kansas City and Austin. And now, with SkyNet we see them making another deployment as an ISP in rural markets. If Google produces proprietary TVBD radios that they only use for themselves then one has to start believing that Google has plans to deploy broadband in many markets as an ISP as it sees opportunities. But if they make TVBD radios available to anybody who wants to deploy them, then we will all go back to scratching our heads and wondering what they are really up to.

I have a lot of clients who will be interested in whitespace radios if they become affordable (and if they happen to operate in one of the markets where there are enough whitespace channels available). Like others I will keep watching this developing market to see if there is any opportunity to make a business plan out of the new spectrum.

Make it Faster


Whenever I look at my clients' data products I almost always have the same advice – make it faster. I am constantly surprised to find companies that deliver small-bandwidth data products when their networks are capable of going much faster. I have come to the conclusion that you should give customers as much bandwidth as you can technically deliver.

I know that networks are operated largely by engineers and technicians and very often I hear the engineers warn management against increasing speeds. They typically are worried that faster speeds mean that customers will use more bandwidth. They worry that will mean more costs with no additional revenue to pay for the extra bandwidth.

But the experience in the industry is that customers don’t use more data when they get more speeds, at least not right away. Customers do not change their behavior after they get faster data – they just keep doing the same things they were doing before, only faster.

Of course, over time, internet data usage is steadily increasing on every network as customers watch more and more programming on the web. But they are going to increase usage regardless of the speed you deliver to them as long as that speed is fast enough to stream video. Going faster just means they can start watching content sooner without having to worry about streaming glitches.

The engineers do have one valid point that must be taken into consideration, in that many networks have chokepoints. A chokepoint is any place in a network that can restrict the flow of data to customers. Chokepoints can be at neighborhood nodes, within your network backbone, at devices like routers, or on the Internet backbone leaving your company. If your network is getting close to hitting a chokepoint you need to fix the issue because the data usage is going to grow independently of the speeds you give your customers. When I hear worry about chokepoints it tells me that the network needs upgrades, probably sooner rather than later.

Historically telecom companies were very stingy with data speeds. The first generations of DSL didn't deliver speeds that were much faster than dial-up, and even today there are many markets that still offer DSL with download speeds of 1 Mbps. Then cable modems came along and upped speeds a little, with the first generation offering speeds up to 3 Mbps. And over time the telcos and the cable companies increased data speeds a little, but not a lot. They engaged in oligopoly competition rather than in product competition. There are many notorious quotes from the presidents of large cable companies saying that their customers don't need more speed.

But then Verizon built FiOS and changed the equation. Verizon's lowest-speed product when they launched service was 20 Mbps, and it was an honest speed, meaning that it delivered as advertised. Many of the DSL and cable modem products at that time were hyped at speeds faster than the network could actually deliver. Cable modems were particularly susceptible to slowing down to a crawl at the busiest times of the evening.

Over time Verizon kept increasing their speeds and on the east coast they pushed the cable companies to do the same. Mediacom in New York City was the first cable company to announce a 50 Mbps data product, and today most urban cable companies offer a 100 Mbps product. However, the dirty secret cable companies don’t want to tell you is that they can offer that product by giving prioritization to those customers, which means that everybody else gets degraded a little bit.

And then came Google in Kansas City, which set the new bar at 1 Gbps. Service providers all over the country are now finding ways to offer 1 Gbps service, even if it's just to a few customers.

I am always surprised when I find a company that operates a fiber network but does not offer fast speeds. I still find fiber networks all the time that have products at 5 Mbps and 10 Mbps. In all of the fiber-to-the-premise technologies, the network is set up to deliver at least 100 Mbps to every customer, and the network provider chokes the speeds down to what is sold to customers. It literally takes the flick of a switch for a fiber provider to change the speed to a home or business from 10 Mbps to 100 Mbps.

And so I tell these operators to make it faster. If you own a fiber network you have one major technological advantage over any competition, which is speed. I just can’t understand why a fiber network owner would offer speeds that are in direct competition with the DSL and cable modems in their market when they are capable of leaping far above them.

But even if you are using copper or coax you need to increase speeds to customers whenever you can. Customers want more speed and you will always be keeping the pressure on your competition.