We Don’t Have Enough Bandwidth

I read three different articles Friday that have a common theme – we just don’t have enough bandwidth in this country.

The first article comes from the Fiber To The Home Council and reports on a recent survey. Video viewing over the Internet is growing faster than expected, led by the viewing habits of the young. One third of young viewers watch video on a cell phone or tablet at the same time that they watch TV, and 12% of viewers under 35 report watching all of their content over the Internet.

The article also points to a recent report from Conviva, a web optimization company, which sampled 22.6 billion video streams and found that 60% of them suffered some degradation due to inadequate bandwidth.

The gist of the article is that demand keeps growing while many parts of the Web are near or at a breaking point in terms of capacity and quality. It’s also evidence that homes don’t want to just watch streaming video, they want to watch multiple streaming videos.

In another article Time Warner announced that it would roll out significantly faster Internet service, but only in competitive markets. The upgrades will come in markets where they are competing against fast competition, such as places where Verizon has built FiOS, where AT&T has relatively fast U-verse and where municipalities have built fiber networks. The company says that they will upgrade to DOCSIS 3 and also install much faster wireless routers. They also will upgrade the DVRs in these markets and roll out apps that are designed for the faster Internet.

But Time Warner also made it clear that they have no plans to upgrade markets where there is no fast competition. My takeaway from this article is that a lot of the incumbent providers are still only doing upgrades in response to direct competition. Otherwise they are quite satisfied with the status quo and only make investments under duress.

Finally, the citizens of Bergen County, New Jersey have started a petition to ask their politicians to offer whatever is necessary to attract Google fiber to the county. Bergen County is the most populous county in the state.

I find this somewhat surprising because most of the people in this county have Verizon FiOS available. And recently Verizon said they plan to have all of New Jersey covered by FiOS. Most of the rest of the country would be thrilled to be upgraded to the kinds of speeds available in Bergen County. FiOS speeds differ by market, but most markets have speeds available from 15 Mbps download to 150 Mbps download. And a few markets have 300 Mbps and 500 Mbps speeds available. Of course, Google would bring 1 Gbps speeds for a little more than what people are paying for 50 Mbps from Verizon.

My takeaway from this is that people are beginning to realize how important very fast Internet service is. Even those who already have some of the fastest Internet speeds in the country do not view what they have as a value.

Unfortunately for the citizens of Bergen County, I find it highly unlikely that Google will ever build to compete against another fiber network. Verizon could easily upgrade its network to compete with Google on speed and price, and the conventional wisdom is that nobody will build a second fiber network to homes because both fiber owners would go broke competing against each other.

But all of these articles are indicative of the daily articles I see that continue to highlight the big gap between the bandwidth people want and what they are being offered in the market. We just don’t have enough bandwidth in the country, at least according to consumers.

Regulatory Alert: Rural Call Completion


The FCC took action on October 28 to address a growing problem of calls that are not completed to rural areas, adopting new rules aimed at remedying the problem.

The FCC noted that the situation was “serious and unacceptable” and that every call that is placed should be terminated. The FCC noted that “Whatever the reason, the consequences of failed calls can be life-threatening, costly, and frustrating. Rural businesses have reported losing customers who couldn’t call in orders, while families attempting to contact elderly relatives have worried when they hear a ring – but no one picks up on the other end because the call never actually went through.”

The FCC surmises several reasons for uncompleted calls:

  • They think that some providers are not routing to rural areas to avoid higher than average terminating access charge rates. The access rates in rural areas are still much higher than rates for major metropolitan areas, which reflects the higher cost of doing business in rural areas. Terminating rates can still be as much as two cents per minute higher. However, the FCC has always said that it insists that every call must go through, and if they ever got evidence of a specific carrier boycotting an area due to high rates I suspect they would levy high fines.
  • They think that much of the problem is due to the fact that calls can be routed through multiple carriers. They note that the best industry practice is to limit to two the number of intermediate carriers involved in routing a call. I know there are a lot of new carriers in the market today, such as multiple new companies marketing voice services like IP Centrex who search for the lowest cost way to route calls. One has to suspect that the long distance carriers beneath some of these carriers have gotten very creative in terms of routing calls to save costs.
  • Some carriers have been sending a ring tone to the calling party before the call has actually been completed. One has to suspect that this is done so that the caller can’t hear all of the intermediate switching going on to get the call completed. The problem with doing this is that the caller will hang up after a few unanswered rings, often before the call has even been completed.

The FCC took several concrete steps to fix the problem. These new rules will be effective in a few weeks once the final rules are published. The new rules are:

  • False audible ringing is prohibited, meaning that a telephone provider cannot send a ringtone to the caller until the call has actually been answered.
  • Carriers with over 100,000 voice lines, and who are the carrier that determines how calls are routed, must collect and retain calling data for a six-month period.
  • Carriers who can certify that they follow best industry practices, such as not routing calls through more than two intermediate carriers, will be able to get a waiver for some or all of the storage and reporting requirements.
  • Carriers who can demonstrate that they have all of the mechanisms in place to complete rural calls can also ask for a waiver from the storage and reporting requirements.

The End of Special Access?


For those not familiar with the term, special access refers to selling traditional data pipes on the TDM telecom networks. These are circuits like T1s and DS3s. While one might think the world had transitioned to ethernet circuits, there are still huge numbers of these traditional circuits being sold in the world.

In many cases the traditional circuits, especially T1s, are being sold because of a lack of fiber in the distribution plant. TDM data circuits can still be delivered over copper in many cases and are often the only way for a business stuck on copper to get faster data speeds.

AT&T recently announced that they were going to do away with all of their long-term discounts on these traditional TDM circuits. Customers and other carriers have been used to buying these products with a significant discount for signing up for long periods of time. There have been discounts offered for agreements to buy for up to seven years. And these discounts have teeth since there are significant penalties for breaking the contracts. As of November 9 AT&T will not be signing any contracts with terms longer than three years.

AT&T says the reason they are doing away with the discounts is that they are going to be discontinuing TDM special access by 2020. However, that rings untrue, since somebody could still sign a 5-year or 7-year contract today and have that contract finish on or before 2020.

Some of the competitors of AT&T filed a letter of complaint with the FCC this month about the cessation of the term discounts. This included Sprint, tw telecom, Cbeyond, EarthLink, Level3 and Megapath. These carriers say that eliminating the discounts is anticompetitive since they are in direct competition with AT&T and they are the primary purchasers of special access circuits.

Sprint says that eliminating the term discounts will increase the prices they pay and ultimately affect what customers pay. They say that in the worst case examples that their costs will rise 24%.

If you have been following this blog you know I have reported that AT&T has been positioning itself to get out of the TDM business. They want to convert all data circuits to ethernet as part of their ‘Project VIP’ initiative. But they also want to get homes and small businesses off of copper and in many cases replace them with cell phones. The FCC has not given AT&T permission to do this anywhere, yet they keep moving toward that goal.

The biggest problem I see with trying to eliminate TDM data circuits, particularly T1s, is that the customers who use them often are in parts of the network that don’t have fiber alternatives. It’s nice for AT&T to be able to talk about offering only ethernet, but in many cases this is going to result in customers losing what little data they are able to buy today.

There are still huge numbers of T1s that are used to support PBXs and small company WANs for functions like data back-up. It’s hard to picture what a customer will do if the copper goes away and they are expected to somehow perform those functions using cellular data – with data plans that are likely to be capped. We tend to think of a T1 these days as a small data pipe. But if you are using it for data backup, a T1 can transmit a lot of data during a month’s time.

The FCC is in the middle right now of looking at special access issues. They have issued a request for data from the industry that will hopefully help them understand the state of the current TDM data market. I think they are going to find that the market is still a lot larger than AT&T wants them to think.

The Future of Interconnection


AT&T and Public Knowledge both testified yesterday at a House Communications Subcommittee hearing about the transition of today’s PSTN to an all-IP network.

Both parties agreed that there were five areas that must be addressed to maintain a functional telephone network:

  • Service for everybody
  • Interconnection and competition
  • Consumer protection
  • Reliability
  • Public Safety

I want to look a little more at the issue of interconnection and competition. Today a large percentage of my clients have interconnection agreements with the incumbent telephone companies. Most of my clients are CLECs but a few are wireless carriers, and each negotiates interconnection under a different set of FCC rules.

Interconnection is vital to maintain competition. Interconnection basically covers the rules that define how voice traffic gets from one network to another. The agreements are very specific and each agreement defines precisely how the carriers will interconnect their networks and who will pay for each part of the network.

For the most part, the rules of Interconnection adopted as part of the Telecommunications Act of 1996 work well and there are probably over 2,000 companies using these agreements to interconnect with each other.

There is a real danger that changing the interconnection rules could harm competitive companies or force them out of the market. Let me revisit a little bit of history to explain what I mean. A long time ago the FCC decided that interconnection for local calls between incumbents should be free, and so incumbent telephone companies don’t charge each other to exchange local minutes. However, I can think of at least five times during my career when the RBOCs like AT&T tried to put in reciprocal charges for this traffic, meaning that both parties would pay each other the same amount for terminating local calls from the other. That sounds okay until you recall that AT&T basically serves all of the metro areas in the country while smaller telcos serve the rural areas. Still today there is a lot more calling made from rural areas into metros than in the other direction, and if such a change were made the rural companies would be sending big checks to the RBOCs for ‘free’ calls.
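The arithmetic behind that concern is easy to sketch. The rate and traffic volumes below are made-up numbers chosen only to illustrate why asymmetric traffic flows matter under reciprocal per-minute charging; they are not actual tariff rates.

```python
# Illustrative settlement under a reciprocal compensation scheme where each
# carrier bills the other the same per-minute rate for terminating calls.
# All figures are hypothetical, for illustration only.

def net_settlement(minutes_a_to_b, minutes_b_to_a, rate_per_minute):
    """Net amount carrier A pays carrier B for the month."""
    a_pays = minutes_a_to_b * rate_per_minute  # A pays B to terminate A's calls
    b_pays = minutes_b_to_a * rate_per_minute  # B pays A to terminate B's calls
    return a_pays - b_pays

# Hypothetical month: a rural telco sends 1,000,000 minutes into the metro
# network but receives only 400,000 minutes back, at half a cent per minute.
print(net_settlement(1_000_000, 400_000, 0.005))  # → 3000.0
```

With balanced traffic the payments cancel out; the moment traffic is lopsided, the carrier originating more minutes writes a check every month, which is exactly the rural-to-metro imbalance described above.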

And the RBOCs have tried to do similar things to competitive carriers with interconnection. The FCC’s interconnection rules say that a competitive carrier can choose to interconnect with a larger company at ‘any technically feasible point’, and yet every few years the RBOCs try to change interconnection agreements to force carriers to carry the traffic to the RBOC hubs. Again, this is a matter of money and the RBOCs want the competitive carriers to pay for everything.

Changing to an all-IP network is likely to open up the same battles. Rather than maintain today’s system of many tandem offices in a state, it is not impossible that the RBOCs will have only one hub in each state, or even one hub for a region of many states. And if they make that kind of change you can expect that they will then expect competitive carriers to pay to carry all of their traffic to and from such hubs. I can tell you that such a change would devastate the business plans of many competitive carriers and would greatly reduce competition in the country.

The FCC has to be diligent in making the changes to IP. Everybody agrees that the technological change needs to be made. It’s more efficient. But we can’t let a technology change be grounds for a land-grab by AT&T and Verizon in an attempt to quash competition. They will, of course, claim that they are not trying to do that, but during my 35-year career I have seen them try exactly that kind of change a whole lot of times. And there is no doubt in my mind they will try to do it again.

The Quiet Expansion of Wi-Fi Networks


I am sure I am like most business travelers and one of the first things I look for when I get to a new place is a WiFi connection for both my laptop and cellphone. Finding WiFi lets me get online with the computer and stops me from racking up data charges on my cell plan.

And for the longest time there has been very little public WiFi outside of Starbucks and hotels. But that is starting to change, at least in some places. There are several companies that have quietly been pursuing WiFi deployments.

The biggest of these are the cable companies. It’s hard to get accurate counts of how many hotspots they have deployed. In 2012 a consortium of cable companies – Comcast, Cox, Time Warner, Bright House and Optimum – banded together as the Cable WiFi consortium to deploy hotspots. Comcast claims that the industry has deployed over 300,000 hotspots. However, the Cable WiFi web site claims over 200,000. But whatever the number, this is far larger than anybody else.

The Cable WiFi networks are offered to the customers of those companies as a mobile data extension of their service. Today these hotspots are centered around big cities – the northeastern corridor, San Francisco, Chicago, Los Angeles, Tampa, Austin and others.

The next biggest provider is AT&T which claims about 30,000 hot spots. AT&T claims over 705 million WiFi connections onto its WiFi network in the fourth quarter of 2012. However, Google has announced that it is getting in the game and nobody knows how big they might get with this effort. But their first announcement is that they are taking over all of the hotspots at Starbucks Coffee (which is a lot of the AT&T hotspots).

The cable companies have been deploying the hotspots in several ways. In some communities they are installing them on utility poles. In other situations they are going into establishments similar to the Starbucks WiFi.

WiFi is becoming more and more important to people’s daily life, so this trend is going to be very popular. Cellphone plans are getting stingier and stingier with cellular data at the same time that cell phones and tablets have the ability to use more and more data. If that data is not offloaded onto WiFi networks then customers are facing some gigantic cellphone bills.

WiFi is never going to be a replacement for cellular. For example, the technology and the spectrum used make it very difficult to do the kind of dynamic handoffs that happen with your cell phone. You can walk out of WiFi coverage on foot, while cellular coverage will stay with you in a car at 60 miles per hour.

But people are finding more and more uses for WiFi all of the time, and so the desire for public WiFi is probably going to explode. The cable companies report that every time they open a new hot spot that usage explodes soon after people figure out it is available. One area where they have seen the biggest use is at the Jersey shore where vacationers and visitors are relieved to find WiFi available.

Anybody building a fiber network ought to consider a wireless deployment. There are several ways to monetize the investment. The obvious revenue from WiFi is through daily, weekly and monthly usage fees. But if you are a triple play provider, a more subtle benefit of wireless is in making your customers stickier since you are giving them a mobile component of their data service. Another revenue stream is to sell prioritized WiFi access to the local municipality, electric company and others, with priority meaning that their employees get a prioritized access to the network, with first responders trumping everybody else. There are also smaller revenue streams such as earning commissions on the DNS traffic for people who purchase products over your WiFi network.

The Future of Rural Broadband


There were several events this week that signal the future of rural broadband to rural subscribers. It is a bleak picture.

First, at a Goldman Sachs conference on Tuesday, AT&T CEO Randall Stephenson said that he hoped that the new FCC chairman Tom Wheeler would be receptive to AT&T’s desire to begin retiring its copper network in favor of its wireless network. At the end of last year AT&T had said in an FCC filing that they were going to be seeking to retire the copper plant from ‘millions of subscribers’.

In that filing AT&T had asked to move from the copper network to an all-wireless all-IP network. Stephenson said that cost savings from getting rid of the copper network would be dramatic.

On that same day, Verizon CEO Lowell McAdam said that the idea of offering unlimited data plans for wireless customers was not sustainable and defied the laws of physics. Earlier this year Verizon had ended all of its unlimited wireless data plans and now has caps on every plan.

Verizon already has a rural wireless-based landline surrogate product that it calls VzW. This uses the 4G network to deliver a landline phone and data anywhere that Verizon doesn’t have landline coverage. The base plan is $60 per month and includes voice and 10 gigabytes of data. Every extra gigabyte costs $10. There is an option to buy a $90 plan that includes 20 gigabytes or $120 for 30 gigabytes.
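The tiered pricing described above is easy to turn into a monthly bill. This is a sketch using only the plan figures quoted in the paragraph ($60/10 GB, $90/20 GB, $120/30 GB, $10 per extra gigabyte); the rounding of partial gigabytes up to the next whole gigabyte is my assumption about how overage would likely be billed.

```python
import math

# (monthly price, included gigabytes) for the plan tiers described above
PLANS = [(60, 10), (90, 20), (120, 30)]

def monthly_cost(usage_gb, plan=(60, 10), overage_per_gb=10):
    """Monthly bill for a given usage, assuming overage is billed
    at $10 per extra gigabyte, rounded up (an assumption)."""
    base, included = plan
    extra_gb = max(0, math.ceil(usage_gb - included))
    return base + extra_gb * overage_per_gb

print(monthly_cost(10))            # base plan, no overage → 60
print(monthly_cost(15))            # 5 GB over the base plan → 110
print(monthly_cost(25, (90, 20)))  # mid tier, 5 GB over → 140
```

The point the numbers make: a household that streams video the way urban cable subscribers do would blow through any of these tiers in days, not weeks.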

Finally, at the same Goldman Sachs conference mentioned above, the CFO of Time Warner said that they saw more room for increasing data rates.

So what does all of this mean for rural subscribers? First, it means that if you are served by a large incumbent like AT&T, they are going to be working hard to retire your copper and force you onto wireless. And we all know that wireless data coverage in rural America is not particularly fast, when you can even get data. The data speeds delivered from a cell tower drop drastically with distance. In urban areas, where towers are only a mile or less apart, this doesn’t have much practical effect. But in a rural environment towers can be many miles apart. People lucky enough to live near a cell tower can probably get okay data speeds, but those farther away will not.

And even if you can get wireless data, your usage is going to be capped. Rural landline data usage today may be slow, but it is unlimited. Customers have learned that if they put in WiFi routers they can channel all of the data usage on their cell phones and tablets to their unlimited landline data connections. But once those connections are wireless, then every byte of data leaving your home, whether directly from a device or through the WiFi router, is going to count against the data caps. So rural America can expect a future where they will have data caps while people in urban areas will not.

Finally, one can expect the price of data to keep climbing. I have been predicting this for a decade. The large telcos and cable companies are facing a future where the old revenue streams of voice and cable TV are starting to decline. The only sustainable product they have is data. And so as voice and cable continue to tumble, expect incumbents to get into the habit of raising data prices every year to make up for those declines. Competition won’t help because cell company data is already expensive, and both the incumbent cable companies and telcos will be raising data rates together.

This is not a pretty picture for a rural subscriber. Customers will be forced from copper to wireless. Speeds are not likely to get much faster. Data is going to be capped and prices will probably be increased year after year.

The DSL TV Market


I find it surprising that DSL TV providers have been the fastest growing segment of the cable TV industry. And my surprise is due to the fact that these companies are delivering TV over the smallest data pipe of any of the comparable technologies. Over the last year the companies using DSL and fiber to deliver cable TV have grown in customers while the traditional cable companies have lost customers.

Cable TV is delivered over DSL using a bonded pair of telephone wires running either ADSL2 or VDSL. In theory these technologies can deliver speeds up to about 40 Mbps. But depending upon the gauge, the age and the condition of the copper, many actual deployments are closer to 20 Mbps than the theoretical 40 Mbps. The bandwidth left over after the TV signal is used to deliver voice and data.
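The squeeze on that pipe is easy to see with a rough bandwidth budget. The per-stream bitrates below are my assumptions for illustration (roughly MPEG-4-era figures: about 6 Mbps per HD stream, 2 Mbps per SD stream, a fraction of a megabit for voice), not any carrier's actual provisioning numbers.

```python
# Rough bandwidth budget for an IPTV household on a 20 Mbps bonded-DSL pipe.
# Per-stream bitrates are illustrative assumptions, not carrier figures.

def remaining_for_data(pipe_mbps, hd_streams, sd_streams,
                       hd_mbps=6.0, sd_mbps=2.0, voice_mbps=0.1):
    """Bandwidth left for Internet data after TV and voice are served."""
    used = hd_streams * hd_mbps + sd_streams * sd_mbps + voice_mbps
    return pipe_mbps - used

# Two HD sets plus one SD set leaves under 6 Mbps for everything else:
print(round(remaining_for_data(20, hd_streams=2, sd_streams=1), 1))  # → 5.9
```

Under these assumptions, a third HD stream would leave almost nothing for data, which is why IPTV systems limit the number of simultaneous streams per home.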

The DSL providers make cable work by using a technology called IPTV. This technology only sends the signals to the home that the customer is asking to see. One can always tell that you are on an IPTV system because of the small pause that occurs every time you change channels.

The DSL cable industry is composed of AT&T U-verse, CenturyLink Prism and a whole slew of smaller telephone companies. Not every telco has taken the bonded DSL path. For example, a number of the mid-sized telcos like Frontier, Fairpoint and TDS have elected to partner with a satellite provider in order to have a TV product in the bundle. But last year TDS ventured out into the DSL TV market in Madison, Wisconsin.

AT&T is by far the most successful DSL TV provider as one would expect from their large customer base. AT&T has made the product available to over 24 million homes. At the end of the first quarter of 2013 they reported having 5 million cable customers on U-verse and 9.1 million data customers.

The biggest problem with using DSL is the distance limitation. The speeds on DSL drop significantly with distance, so customers have to be on a relatively short copper path in order for it to work. The DSL that AT&T is using can support the U-verse product up to about 3,500 feet on a good single copper pair and up to 5,500 feet using two bonded copper pairs. And the key words in that description are good copper, because older copper and copper with problems will degrade the speed of the product significantly.
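Those distance limits amount to a simple loop-qualification rule. This is a toy sketch built only from the two figures cited above; real qualification engines also weigh wire gauge, bridged taps, and measured line condition, which I reduce here to a single yes/no flag.

```python
# Toy loop-qualification check using the distance limits cited above:
# ~3,500 feet on one good copper pair, ~5,500 feet on two bonded pairs.
# Real qualification also depends on gauge, taps, and copper condition.

def qualifies(loop_feet, bonded_pairs, good_copper=True):
    """Can this copper loop plausibly support the U-verse-class product?"""
    if not good_copper:
        return False  # degraded copper shortens reach unpredictably
    limit = 5500 if bonded_pairs >= 2 else 3500
    return loop_feet <= limit

print(qualifies(3000, 1))  # → True
print(qualifies(5000, 1))  # → False; too far for a single pair
print(qualifies(5000, 2))  # → True; bonding extends the reach
```

The practical upshot is that telcos must push fiber-fed DSLAMs deep into neighborhoods to shorten loops before they can sell the TV product at all.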

I really don’t know who is in second place. CenturyLink announced that they had 120,000 TV customers on their Prism product at the end of the first quarter of 2013. There may be some other telcos out there with more DSL cable customers. But CenturyLink is fairly new to the product line, having launched it just a few years ago. They still only offer it in a few markets but are adding new markets all of the time. So if they are not in second place they soon will be.

In researching this article I came across some web sites that carry customer complaints about Prism. Look at the Yelp pages for CenturyLink in Las Vegas. I’ve always suspected that unhappy customers are more likely to post an on-line review than happy ones, but some of the stories there are extraordinarily bad. Obviously CenturyLink is having some growing pains and has a serious disconnect between their marketing and sales departments and their customer service. But some of the policies described there, such as charging people a large disconnect fee even when there is no contract, are surprising in a competitive environment. And yet, even with these kinds of issues, the company has added over 100,000 customers in just a few years.

I have to wonder how this industry segment is going to handle where the cable business is going. How much can they squeeze out of a 20 Mbps data pipe when customers want to watch several TVs at the same time, record shows while watching another show, and also stream video to tablets and laptops, all simultaneously? Yesterday I noted the new trend in large TVs, which is to split the screen into four parts, each showing something different. Most reviews of the performance of TV over DSL are pretty good, but how will DSL handle the guy who wants to watch four HD football games at the same time while surfing the internet?

Do You Understand Your Chokepoints?

Almost every network has chokepoints. A chokepoint is some place in the network that restricts data flow and degrades the performance of the network beyond that point. In today’s environment, where everybody is trying to coax more speed out of their network, these chokepoints are becoming more obvious. Let me look at the chokepoints throughout the network, starting at the customer premises.

Many don’t think of the premises as a chokepoint, but if you are trying to deliver a large amount of data, then the wiring and other infrastructure at the location will be a chokepoint. We are always hearing today about gigabit networks, but there are actually very few wiring schemes available that will deliver a gigabit of data for more than a very short distance. Even category 5 and 6 cabling is only good for short runs at that speed. There is no WiFi on the market today that can operate at a gigabit. And technologies like HPNA and MoCA are not fast enough to carry a gigabit.

But the premises wiring and customer electronics can create a chokepoint even at slower speeds. It is a very difficult challenge to bring speeds of 100 Mbps to large premises like schools and hospitals. One can deliver fast data to the premises, but once the data is put onto wires of any kind the performance decays with distance, and generally a lot faster than you would think. I look at the recently announced federal goal of bringing a gigabit to every school in the country and I wonder how they plan to move that gigabit around the school. The answer mostly is that with today’s wiring and electronics, they won’t. They will be able to deliver a decent percentage of the gigabit to classrooms, but the chokepoint of wiring is going to eat up a lot of the bandwidth.

The next chokepoint in a network for most technologies is neighborhood nodes. Cable TV HFC networks, fiber PON networks, cellular data networks and DSL networks all rely on creating neighborhood nodes of some kind, a node being the place where the network hands off the data signal to the last mile. And these nodes are often chokepoints in the network due to what is called oversubscription. In the ideal network there would be enough bandwidth delivered so that every customer could use all of the bandwidth they have been delivered simultaneously. But very few network operators want to build that network because of the cost, and so carriers oversell bandwidth to customers.

Oversubscription is the process of bringing the same bandwidth to multiple customers since we know statistically that only a few customers in a given node will be making heavy use of that data at the same time. Effectively a network owner can sell the same bandwidth to multiple customers knowing that the vast majority of the time it will be available to whoever wants to use it.

We are all familiar with the chokepoints that occur in oversubscribed networks. Cable modem networks have been infamous for years for bogging down each evening when everybody uses the network at the same time. And we are also aware of how cell phone and other networks get clogged and unavailable in times of emergencies. These are all due to the chokepoints caused by oversubscription at the node. Oversubscription is not a bad thing when done well, but many networks end up, through success, with more customers per node than they had originally designed for.
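One simple way to quantify the oversubscription risk described above is a binomial model: if each subscriber in a node is independently active with some probability, the chance that more than the node's capacity are busy at once can be computed directly. The node size, activity rates, and capacity below are assumptions chosen for illustration, not engineering guidance.

```python
# Binomial sketch of oversubscription risk: with n subscribers each
# independently active with probability p, what is the chance that more
# than k of them are busy at once? Figures here are illustrative only.
from math import comb

def p_congestion(n, p, k):
    """Probability that more than k of n subscribers are active at once."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 100 homes on a node sized for 25 simultaneous heavy users:
# at 10% activity, congestion is vanishingly rare; at 25%, it is routine.
print(round(p_congestion(100, 0.10, 25), 4))
print(round(p_congestion(100, 0.25, 25), 4))
```

This is the arithmetic behind "success ruins the node": the design is sound at the activity level it was sized for, but as per-customer usage creeps up, the congestion probability goes from negligible to near certainty without a single new customer being added.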

The next chokepoint in many networks is the backbone fiber electronics that deliver bandwidth from the hub to the nodes. Data bandwidth has grown at a very rapid pace over the last decade and it is not unusual to find backbone data feeds where today’s data usage exceeds the original design parameters. Upgrading the electronics is often costly because in some networks you have to replace the electronics to all nodes in order to fix the ones that are full.

Another chokepoint in the network can be hub electronics. It’s possible to have routers and data switches that are unable to smoothly handle all of the data flow and routing needs at the peak times.

Finally, there can be a chokepoint in the data pipe that leaves a network and connects to the Internet. It is not unusual to find Internet pipes that hit capacity at peak usage times of the day which then slows down data usage for everybody on the network.

I have seen networks that have almost all of these chokepoints and I’ve seen other networks that have almost no chokepoints. Keeping a network ahead of the constantly growing demand for data usage is not cheap. But network operators have to realize that customers recognize when they are getting shortchanged and they don’t like it. The customer who wants to download a movie at 8:00 PM doesn’t care why your network is going slow because they believe they have paid you for the right to get that movie when they want it.

Remember the White Pages?


Earlier this year the Virginia State Corporation Commission granted an interim waiver for Verizon to stop distributing residential white pages. This makes Virginia one of the last states to do this. The waiver came with the same kinds of requirements we’ve seen in other states. Verizon must make sure that the information that was available in the white pages is available on its website and on the website of SuperMedia. Consumers who still want the white pages must be able to order them in either paper or CD format.

Most states have allowed the larger LECs like Verizon and AT&T to stop delivering white page directories with the same sorts of caveats. AT&T has reported that in the states where it has been able to get out of the white pages business, only about 2% of customers still ask for a paper copy of the books. All of the phone companies are still publishing business white pages, and they report there is still good demand for those listings.

The push to do away with the white pages was driven both by the phone companies and by consumer groups. Thinking about that push reminded me of this funny YouTube video from 2008:

We are certainly only a few years away from a time when white pages will be a memory shared only by us old timers. Back in 2008 a Harris poll showed that only 11% of households had any interest in the white pages in paper or even online format. One has to imagine that the growth of cell phones since then has nearly eliminated that need, since our cell phones now act as our personal directories of the people we want to remember.

Consumer groups have now turned their attention to the yellow pages. Since the yellow pages business makes a huge profit, the telcos don't agree with any push to eliminate them. The Local Search Association (formerly the Yellow Pages Association), the national trade group representing yellow pages publishers, has created a system in most places where customers can opt out of receiving yellow pages. Consumers can go to https://www.yellowpagesoptout.com/ and opt out for three years at a time.

Unless a telco seeks permission from the state commission to get out of the white pages business, or else shares white pages with a larger LEC, it is still required to publish them. I still have a lot of clients that publish their own directories that include residential white pages. Most of these directories are not the giant doorstops published in metropolitan areas; instead they are small local books that combine the white and yellow pages, and they are mostly still well-received by customers.

A lot of the yellow pages business has moved online, and the industry is now embroiled in the same kinds of issues that affect other companies that live on advertising, like Google. A big push this year is for Do Not Track legislation that would give consumers the ability to opt out of being tracked by web advertisers. One thing about the paper yellow pages: they never tracked who you were or what you searched for.

Is There any Life Left in Copper?

RG-59 coaxial cable A: Plastic outer insulation B: Copper-clad aluminium braid shield conductor C: Dielectric D: Central conductor (copper-clad steel) (Photo credit: Wikipedia)

Copper is still a very relevant technology today; on a global scale, nearly two-thirds of all broadband subscribers are still served by copper. That percentage is smaller in the US, but this country has a far more widely deployed cable TV plant than most of the rest of the world.

The most widely deployed DSL technologies today are ADSL2 and VDSL. In theory these technologies can reach speeds of about 40 Mbps. But depending upon the gauge, age, and condition of the copper, many actual deployments are closer to 20 Mbps than to the theoretical 40 Mbps.

ADSL2 and VDSL have been widely deployed by AT&T in its U-verse product, which serves over 7 million data customers and over 4.5 million cable TV customers. AT&T has made the product available to over 24 million homes and can support it up to about 3,500 feet on a good single copper pair and up to 5,500 feet using two bonded copper pairs.

And ADSL2 is a pretty decent product. It can deliver IPTV and still support an okay data pipe. However, as the cable companies find ways to get more bandwidth out of their coaxial cable and as new companies deploy fiber, these DSL technologies are going to fall behind the competition again.

So what is out there that might resurrect copper and deliver speeds faster than ADSL2? Not too long ago I wrote a blog about G.Fast, Alcatel-Lucent's attempt to squeeze more speed out of legacy copper networks. In recent field tests ALU achieved a maximum speed of 1.1 Gbps over 70 meters and 800 Mbps over 100 meters on brand-new copper. On older copper the speed dropped to 500 Mbps at 100 meters.

However, G.Fast's distance limitations are far shorter than ADSL2's, and G.Fast is really more of a drop technology than a last-mile technology; it would require a telco like AT&T to build a lot more fiber to get even closer to homes. You have to wonder if it makes any sense to rebuild the copper network to get up to 500 Mbps out of copper when fiber could deliver many gigabits.

There are other technologies that have been announced for copper. Late last year Genesis Technical Systems announced a scheme to get 400 Mbps out of copper using a technology they call DSL Rings, which would somehow tie 2 to 15 homes into a ring and bridge them with copper. Details of how the technology works are still a little sketchy.

In 2011 the Chinese vendor Huawei announced a technology that can push up to a gigabit over 100 meters. This sounds very similar to G.Fast and appears aimed at using the existing copper within a home rather than rewiring.

There is one newer technology that is finally getting wider use: bonded VDSL2 pairs that use vectoring. Vectoring is a noise-cancellation technology that works much the way noise-cancelling headphones eliminate sound interference; it removes most of the crosstalk between bonded pairs of copper. Alcatel-Lucent hit the market in late 2011 with bonded-pair VDSL2 that can deliver up to 100 Mbps, although in real deployments speeds are reported to be 50 Mbps to 60 Mbps on older copper. That is probably enough speed to give DSL another decade, although doing so requires a full replacement of older DSL electronics with VDSL2. One has to wonder how many times the telcos will keep upgrading their copper electronics to get a little more speed rather than taking the leap to fiber like Verizon did.
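Vectoring's core trick can be shown with a toy model: because the DSLAM generated the signals on both pairs, it can estimate how much of a neighboring pair's signal leaked into the victim pair and subtract it out. This sketch is purely illustrative; the coupling value and white-noise signal model are made up, and real vectoring cancels far-end crosstalk across many pairs in a binder, per frequency tone:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
victim_tx = rng.standard_normal(n)    # signal sent on the victim pair
neighbor_tx = rng.standard_normal(n)  # signal sent on a neighboring pair
coupling = 0.3                        # assumed crosstalk coupling strength

# The victim line receives its own signal plus crosstalk from the neighbor.
received = victim_tx + coupling * neighbor_tx

# Vectoring: the DSLAM already knows neighbor_tx, so it can estimate the
# coupling by least-squares projection and subtract the interference.
est_coupling = (received @ neighbor_tx) / (neighbor_tx @ neighbor_tx)
cleaned = received - est_coupling * neighbor_tx

# Mean-squared crosstalk power before and after cancellation.
crosstalk_before = np.mean((received - victim_tx) ** 2)
crosstalk_after = np.mean((cleaned - victim_tx) ** 2)
```

The same least-squares idea, applied across every pair in a cable binder, is what lets a vectored pair run close to the speed it would get with no neighbors at all.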

One only has to look at the growth rate of data used by homes to ask how long copper can remain relevant. Within a few short decades we have moved from homes that could get by on dial-up to homes that find a 20 Mbps connection too slow. Looking just a few years forward, we can see the continued growth of video sharing and a lot of new traffic from cellular femtocells and the Internet of Things. It won't be long before people are bemoaning the inadequacy of their 50 Mbps connections, and that day is probably not more than a decade away.