Cable Company Gigabit

We are starting to get a look at what a gigabit product from the cable companies might look like. Late last year Comcast rolled out a gigabit product in parts of Atlanta, Detroit, Nashville and Chattanooga. They are now rolling out the technology across the country, and the company says that gigabit speeds will be available in all markets by 2018.

Comcast has elected to make the upgrades by implementing DOCSIS 3.1 technology on their networks. This technology allows the network to bond together numerous empty channels on the cable system to be used for broadband.
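The channel-bonding arithmetic is simple enough to sketch. The per-channel figure below is an assumption for illustration (a 6 MHz QAM-256 downstream channel carries roughly 38 Mbps of usable payload), not a number from Comcast's actual plant:

```python
# Back-of-the-envelope math for DOCSIS channel bonding.
# USABLE_MBPS_PER_CHANNEL is an assumed approximation of the payload
# of one 6 MHz QAM-256 downstream channel.

USABLE_MBPS_PER_CHANNEL = 38

def bonded_throughput_mbps(channels: int) -> int:
    """Aggregate downstream payload when `channels` empty TV slots are bonded."""
    return channels * USABLE_MBPS_PER_CHANNEL

# Under these assumptions it takes about 27 bonded channels to clear 1 Gbps:
for n in (8, 16, 24, 27):
    print(n, "channels ->", bonded_throughput_mbps(n), "Mbps")
```

The point of the sketch is that a gigabit tier consumes a lot of channel slots, which is why freeing up empty channels comes first.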

In markets where there is competition with Google Fiber or another fiber provider, the Comcast product is being sold at an introductory price of $70 per month with a 3-year contract. Month-to-month pricing without the contract is $140 per month. Judging by discussion websites where Comcast customers chat, there are already many markets where the $70 contract price is not available. Some customers report getting prices of $110 to $120 per month, so perhaps the company is flexible with those willing to wade through the customer service maze and sign a term contract.

The current Comcast product delivers up to 1 Gbps download and 35 Mbps upload. You can expect Comcast to make future upgrades that will improve the upload speeds – but that upgrade is not included in this first generation of DOCSIS 3.1 technology. For now the upload speeds will be a barrier to any application that needs fast upload speeds.

The new technology also requires new hardware, meaning a new cable modem and a new WiFi router capable of handling the faster data speeds. So expect the price to be bumped higher to rent the hardware.

It’s hard to imagine that many customers are going to pony up more than $150 per month to get a gigabit connection and modem. When Google Fiber first introduced $70 gigabit to Kansas City (and when that was their only product), there were reports that there were neighborhoods where as many as 30% of the households subscribed to the gigabit product. But Google has a true $70 price tag and didn’t layer on fees for a modem or any other fees, like Comcast is surely going to do. It’s hard to imagine many customers agreeing to a 3-year contract for the gigabit product in competitive markets if they can buy it from somebody else without the contract. But perhaps Comcast will offer bundling incentives to pull the real cost under $70.

But we know when there are more choices that most customers will opt for the lowest-price product that they think is adequate for their needs. For example, when Google Fiber came to Atlanta they also had a 100 Mbps product for $50 per month and it’s likely that most customers chose that product rather than paying extra for the gigabit.

The Comcast pricing might reflect that Comcast doesn't want to add too many high-bandwidth customers at the same time. While DOCSIS 3.1 increases the size of the data pipes available to customers, it doesn't make any significant improvements in the last mile network. To the extent that high-bandwidth customers use a lot more data, too many gigabit customers in a cable company node could degrade service for everybody else. It's likely that most gigabit customers don't use much more data than 100 Mbps subscribers – they just get things done more quickly. But I am sure that Comcast still worries about having too many high-bandwidth customers in the network.

Comcast and other cable companies are seeing more competition. For example, CenturyLink is selling $85 gigabit service in many western cities and passed about 1 million homes with fiber last year. Verizon FiOS just increased their data speeds in their fiber markets – not quite to a gigabit yet, but at ranges up to half a gigabit. But in the vast majority of the country the cable companies are not going to face significant competition in the foreseeable future.

FCC Commissioner Michael O'Rielly said a few weeks ago that ultrafast broadband is a marketing gimmick. While he was referring even to 100 Mbps broadband as a gimmick, it's hard not to agree with him that a residential gigabit product priced above $150 per month is more gimmick than anything else. There can't be that many households in any market willing to pay that much extra just for the prestige of saying they have a gigabit.

But over time the prices will drop and the demand for bandwidth will grow, and a decade from now there will be a significant portion of the market clamoring for an affordable gigabit product. Remember that we've seen this same thing happen a number of times in the past. I remember the big deal the cable companies made when they first increased speeds to 15 Mbps. The funny thing is that the market has a way of filling faster data pipes, and the day will come sooner than we expect when many households will legitimately want and need gigabit data pipes.

The Beginning of the End for HFC?

We’ve spent the last few years watching the slow death of telephone copper networks. Rural telcos all over the country are rapidly replacing their copper with fiber. AT&T has made it clear that they would like to get out of the copper business and tear down their old copper networks. Verizon has expressed the same but decided to sell a lot of their copper networks rather than be the ones to tear them down. And CenturyLink has started the long process of replacing copper with fiber and passed a million homes with fiber in urban areas in 2016.

Very oddly, the dying copper technology got a boost when the FCC decided to award money to the big rural copper owners like Frontier, CenturyLink and Windstream. These companies are now using CAF II money to try to squeeze one more generation of life out of clearly old and obsolete copper. Without that CAF II money we’d be seeing a lot more copper replacement.

I’ve been in the telco industry long enough to remember significant new telco copper construction. While a lot of the copper network is old and dates back to the 50s and 60s, there was still some new copper construction as recently as a decade ago, with major new construction before that. But nobody is building new telco copper networks these days, which is probably the best way to define that the technology is dead – although it’s going to take decades for the copper on poles to die.

This set me to thinking about the hybrid fiber-coaxial (HFC) networks operated by the cable companies. Most of these networks were built in the 60s and 70s when cable companies sprang up in urban areas across the country. There are rural HFC networks stretching back into the 50s. It struck me that nobody I know of is building new HFC networks. Sure, some cable companies are still using HFC technology to reach a new subdivision, but nobody would invest in HFC for a major new build. All of the big cable companies have quietly switched to fiber technology when they build any sizable new subdivision.

If telco copper networks started their decline when companies stopped building new copper networks, then we have probably now reached that same turning point with HFC. Nobody is building new HFC networks. What’s hanging on poles today is going to last for a while, but HFC networks will eventually take the same path into decline as copper networks.

There will be a lot of work and money poured into keeping HFC networks alive. Cable companies everywhere are looking at upgrades to DOCSIS 3.1 as a way to get more speeds out of the technology – much in the same way that DSL prolonged copper networks. The big cable companies, in particular, don’t want to spend the capital dollars needed to replace HFC with fiber – Wall Street will punish any cable company that tries to do so.

Cable networks have a few characteristics that give them a better life than telephone copper. Having the one giant wire in an HFC network is superior to having large numbers of tiny wires in a copper network which go bad one-by-one over time.

But cable networks also have one big downside compared to copper networks – they leak interference into the world and are harder to maintain. The HFC technology uses radio waves inside the coaxial cable to transmit signal. Unfortunately, these radio waves can leak out into the outside world at any place where there is a break in the cable. And there are huge numbers of breaks in an HFC network – one at every place where a tap is placed to bring a drop to a customer. Each of the taps and other splices in a cable network is a source of potential frequency leakage. Cable companies spend a lot every year cleaning up the most egregious leaks – and as networks get older they leak more.

Certainly HFC networks are going to be around for a long time to come. But we will slowly start seeing them replaced with fiber. Altice is the first cable company to say they will be replacing their HFC network with fiber over the next few years. I really don’t expect the larger cable companies to follow suit and in future years we will be deriding the networks used by Comcast and Charter in the same way we do old copper networks today. But I think that somewhere in the last year or two we saw the peak of HFC, and from that point forward the technology is beginning the slow slide into obsolescence.

Technology Hype

I find it annoying when I read short articles proclaiming that a new technology that can deliver faster data speeds is right around the corner. This has most recently happened with 5G cellular, but in the past there have been spates of such articles talking about cable modem speeds with DOCSIS 3.1 and faster copper speeds with G.fast.

It’s always easy to understand where such articles come from. Some vendor or large ISP will announce a technical breakthrough in a lab, and then soon thereafter there are numerous articles written by non-technical people proclaiming that we will soon be seeing blazing speeds at our homes or on our cell phones.

But these articles are usually premature, and sadly there are real-life consequences to this kind of lazy press. Politicians and policy makers see these articles, accept them as gospel, and make decisions based upon these misleading claims. It is then up to people like me to come along behind and explain to them why the public claims are not true.

This is happening right now with talk about blazingly fast millimeter wave radios to replace fiber loops. Even if this technology were ready for market tomorrow (which it won’t be), like any technology it will have limits. There are places where wireless loops might be a great solution but other places where it may never be financially or technically feasible. Yet a whole lot of the country now believes that our future broadband is dependent upon gigabit wireless, and this is quashing plans for building fiber networks.

One recent set of these kinds of articles proclaimed that DOCSIS 3.1 is going to bring everybody gigabit speeds over cable company networks. And there is some truth to that, but the nuances are never explained. There are a lot of changes needed in a cable network to bring gigabit speeds to all of their customers. What is really happening in the first upgrade is that cable networks will have limited gigabit capabilities. The companies will be able to deliver gigabit speeds to perhaps hundreds of people in a market. Their networks would have problems if they tried to deliver it to thousands, and their networks would crash if they tried to give fast speeds to everybody.

Consider the list of issues that must be overcome to use a cable network to bring gigabit speeds to the masses:

  • First a cable company has to free up enough empty channels to make room for the gigabit data channels. For many cable systems this will require upgrading the overall bandwidth of the cable network, and this can be very expensive. In the most extreme cases it can mean replacing all of the network amplifiers and power taps, and even sometimes replacing some of the coaxial cable.
  • Cable bandwidth is shared by all of the customers in a neighborhood (called a node). If a cable company only sells a few gigabit products in a given node there will be some small degradation of bandwidth performance for everybody else. But if enough customers want to buy a gigabit the cable company will be forced to ‘split’ the nodes so that there are fewer homes sharing the bandwidth. Cable companies today have nodes of 200 – 300 customers, compared to fiber network nodes that generally range between 16 and 32 customers per node. A cable company has to build more fiber and install more electronics to get nodes as small as fiber systems.
  • Every network has chokepoints, or places where only a set amount of bandwidth can be handled at the same time. There are several of these chokepoints in a cable network – at the node, on the data pipe serving the node, at several data concentration points within the headend, and with the pipe to the outside Internet. You can’t upgrade speeds without upgrading these chokepoints, and that can be expensive.
  • At some point if enough customers want fast speeds the network would need to be fundamentally reconfigured to a new technology. This might mean converting the whole headend and electronics to IPTV. It might mean moving the CMTS (the device that talks to the data at each node) into the field, similar to a fiber network. And it would mean building a lot more fiber, to the point where there would almost be as much fiber as in a fiber-to-the-premise network.
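The node-splitting issue in the list above is just division. The node capacity and node sizes below are assumptions chosen to match the ranges mentioned, not figures from any particular cable network:

```python
# A sketch of why node splits matter: downstream capacity in a node is
# shared by every home in it. The 5 Gbps node capacity is an assumed,
# illustrative figure for an upgraded DOCSIS 3.1 node.

def mbps_per_home(node_capacity_mbps: float, homes_in_node: int) -> float:
    """Per-home share of downstream capacity if every home is active at once."""
    return node_capacity_mbps / homes_in_node

NODE_CAPACITY = 5_000  # assumed ~5 Gbps of downstream payload per node

# Successive node splits, moving from today's cable nodes toward PON sizes:
for homes in (250, 125, 32):
    print(homes, "homes ->", round(mbps_per_home(NODE_CAPACITY, homes), 1), "Mbps each")
```

This worst-case per-home number is pessimistic (customers rarely all peak at once), but it shows why selling many gigabit connections forces the node splits and fiber construction described above.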

There is always some truth in these technological pronouncements. But these articles are way off base when they imply that a given breakthrough is the end-all solution to broadband. Yes, cable systems can be faster now, which is great. But DOCSIS 3.1 does not make a cable network equivalent to a Google Fiber network that can already deliver a gigabit to everybody. And yes, there is great promise in wireless local loops. But even after all of the issues with deploying wireless in a real-life environment are solved, the technology is only going to work where there is fiber fairly close to customers and when a number of other factors are just right. These kinds of nuances matter and I really wish that non-techie writers would stop telling us that the solution to all of our broadband speed problems is right around the corner. Because it's not.

Industry Shorts – August 2016

The following are a few topics which I found interesting but don’t require a full blog entry:

FCC to Allow Cable Blackouts. The FCC has officially decided that it is not going to intervene in the many disputes we see these days between programmers and cable operators. Only a few years ago these disputes were fairly rare, but now you can’t read the industry press without seeing some new dispute – many of which lead to content blackouts when the two sides can’t reach a resolution.

The FCC has always been allowed to intervene in disputes and routinely did so a decade ago. The American Cable Association, which represents small and medium cable companies, wants the FCC to be more active today to protect against abuses by the programmers, but the agency has decided to let the market work to resolve disputes. There have been over 600 blackouts since 2010 and the frequency seems to be accelerating.

Blogger Loses Life’s Work. Google recently hit the news when it disabled access to 14 years of blogs as well as artwork, photographs, a novel and even the Gmail account that was being stored online by Dennis Cooper. The blogger claims he received no notice before his work disappeared, and Google won’t tell him why he was cut off or if his content still exists. Cooper’s blog always contained controversial content and was a popular destination for fans of experimental literature and avant-garde writing.

His case highlights the tension between First Amendment rights and the ability of private corporations like Google to allow or disallow content on their private platforms. Google has slowly been cutting back on storage services such as Google News Drives and Google Groups and Cooper’s content might not even still exist. If anything, this case highlights the importance of backing up content offline. It also raises the issue of how permanent anything is on the web.

AT&T Testing Drone Cell Sites. AT&T has been testing the use of drones as flying cell sites to use during big events. Large events always overwhelm local cellular sites and drones might be the answer to give access to many people in a concentrated area.

The company has already been using a technology that it calls COWs (Cells on Wheels) that are brought to large sporting events to provide more coverage. But the hope is that drones can be deployed more quickly and for a lower cost and provide better service. Of course, this just means more of a phenomenon I’ve seen a few times in recent years where people in the stands at a football game are watching the same game on their cellphone instead of looking at what is in front of them.

Huawei Creates 10 Gbps Cable Platform. We are in the earliest stages of deployment of gigabit broadband using DOCSIS 3.1 on cable systems, and Chinese vendor Huawei claims to have already created a 10 Gbps platform using the new standard.

The company faces several hurdles to deploying the technology in the US since the company is under scrutiny by the US for doing business with North Korea and with Iran during the recent embargo. But the biggest issue with a cable company offering gigantic bandwidth over coaxial cable is freeing up enough bandwidth in a cable TV network to do so. Cable companies have to free up at least 24 empty channels to offer a gigabit over coax and it seems unlikely that they are willing to open up many more channels than that for higher bandwidth. The only realistic scenario for going much beyond a gigabit is to migrate a cable network to IPTV and make the whole network into a big data pipe – but this is a very costly transition that means a new headend and new settop boxes.
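A rough spectrum check shows why 10 Gbps is so much harder than 1 Gbps on coax. The per-channel payload and channel width below are assumed illustrative figures (roughly a QAM-256 channel in 6 MHz), and total usable plant spectrum varies, with 750 to 1,000 MHz plants being common:

```python
import math

# How much channel spectrum a given data target would consume, assuming
# an illustrative ~38 Mbps of payload per 6 MHz channel.

MBPS_PER_CHANNEL = 38   # assumed payload of one 6 MHz QAM-256 channel
CHANNEL_WIDTH_MHZ = 6

def spectrum_needed_mhz(target_gbps: float) -> int:
    """Spectrum consumed by enough bonded channels to hit the target speed."""
    channels = math.ceil(target_gbps * 1000 / MBPS_PER_CHANNEL)
    return channels * CHANNEL_WIDTH_MHZ

print(spectrum_needed_mhz(1))   # fits alongside the TV lineup
print(spectrum_needed_mhz(10))  # more spectrum than the whole plant carries
```

Under these assumptions a gigabit needs about 162 MHz of spectrum, while 10 Gbps would need roughly 1,584 MHz – more than the entire plant – which is why the all-IPTV conversion is the only realistic path.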

Facebook Develops Mobile Access Point. Facebook has developed a shoebox-sized access point that can support wireless transmissions including 2G, LTE and WiFi. The box is hardened for the harshest conditions, is relatively low-powered and is intended as a way to expand Internet coverage in poorer areas around the world. Most of the world now connects with the Internet wirelessly and this access point can enable customers with a wide range of devices to gain access.

 

The Real Impact of Network Neutrality

The federal appeals court for Washington DC just upheld the FCC’s net neutrality order in its entirety. There was a lot of speculation that the court might pick and choose among the order’s many different sections or that they might like the order but dislike some of the procedural aspects of reaching the order. And while there was one dissenting opinion, the court accepted the whole FCC order, without change.

There will be a lot of articles telling you in detail what the court said. But I thought this might be a good time to pause and look to see what net neutrality has meant so far and how it has impacted customers and ISPs.

ISP Investments. Probably the biggest threat we heard from the ISPs is that the net neutrality order would squelch investment in broadband. But it’s hard to see that it’s done so. It’s been clear for years that AT&T and Verizon are looking for ways to walk away from the more costly parts of their copper networks. But Verizon is now building FiOS in Boston after many years of no new fiber construction. And while few believe that AT&T is spending as much money on fiber as they are claiming, they are telling the world that they will be building a lot more fiber. And other large ISPs like CenturyLink are building new fiber at a breakneck pace.

We also see all of the big cable companies talking about their upgrades to DOCSIS 3.1. Earlier this year the CEO of Comcast was asked at the INTX show in Boston where the company had curtailed capital spending and he couldn’t cite an example. Finally, I see small telcos and coops building as much fiber as they can get funded all over the country. So it doesn’t seem like net neutrality has had any negative impact on fiber investments.

Privacy. The FCC has started to pull the ISPs under the same privacy rules for broadband that have been in place for telephone for years. The ISPs obviously don’t like this, but consumers seem to be largely in favor of requiring an ISP to ask for permission before marketing to you or selling your information to others.

The FCC is also now looking at restricting the ways that ISPs can use the data gathered from customers from web activity for marketing purposes.

Data Caps. The FCC has not explicitly made any rulings against data caps, but they’ve made it clear that they don’t like them. This threat (along with a flood of consumer complaints at the FCC) seems to have been enough to get Comcast to raise its data caps from 300 GB per month to 1 TB. It appears that AT&T is now enforcing its data caps and we’ll have to see if the FCC is going to use Title II authority to control the practice. It will be really interesting if the FCC tackles wireless data caps. It has to be an embarrassment for them that the wireless carriers have been able to sell some of the most expensive broadband in the world under their watch.

Content Bundling and Restrictions. Just as the net neutrality rules were passed there were all sorts of rumors of ISPs making deals with companies like Facebook to bundle their content with broadband in ways that would have given those companies priority access to customers. That practice quickly disappeared from the landline broadband business, but there are still several cases of providers using zero-rating to give their own content priority over other content. My guess is that this court ruling is going to give the FCC the justification to go after such practices.

It’s almost certain that the big ISPs will appeal this ruling to the Supreme Court. But overturning an appeals court ruling is a hard thing to do, and the Supreme Court would have to decide that the appeals court for Washington DC made a major error in its findings before they would even accept the case, let alone overturn the ruling. I think the court victory gives the FCC the go-ahead to fully implement the net neutrality order.

 

A New Cable Network Architecture

There seems to be constant press about the big benefits that are going to come when cable coaxial networks upgrade to DOCSIS 3.1. Assuming a network can meet all of the requirements for a DOCSIS 3.1 upgrade the technology is promising to allow gigabit download speeds for cable networks and provide cable companies a way to fight back against fiber networks. But the DOCSIS 3.1 upgrade is not the only technological path that can increase bandwidth on cable networks.

All of the techniques that can increase speeds have one thing in common – the network operator needs to have first freed up channels on the cable system. This is the primary reason that cable systems have converted to digital – so that they could create empty channel slots on the network that can be used for broadband instead of TV.

The newest technology that offers an alternative to DOCSIS 3.1 is being called Distributed Access Architecture (DAA). This solution moves some or all of the broadband electronics from the core headend into the field. In a traditional DOCSIS cable network the broadband paths are generated to customers using a device called a CMTS (cable modem termination system) at the core. This is basically a router that puts broadband onto the cable network and communicates with the cable modems.

In the most extreme versions of DAA the large CMTS in the headend would be replaced by numerous small neighborhood CMTS units dispersed throughout the network. In the less extreme version of DAA there would be a smaller number of CMTS units placed at existing neighborhood nodes. Both versions provide for improved broadband in the network. For example, in the traditional HFC network a large CMTS might be used to feed broadband to tens of thousands of customers. But dispersing smaller CMTS units throughout the network would result in a network where fewer customers are sharing bandwidth. In fact, if the field CMTS units can be made small enough and cheap enough a cable network could start to resemble a fiber PON network that typically shares bandwidth with up to 32 customers.

There are several major advantages to the DAA approach. First, moving the CMTS into the field carries the digital signal much deeper into the network before it gets converted to analog. This reduces interference which strengthens the signal and improves quality. And sending digital signals deeper into the network allows support for higher QAM, which is the signaling protocol used to squeeze more bits per hertz into the network. Finally, the upgrade to DAA is the first step towards migrating to an all-digital network – something that is the end game for every large cable company.
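"Higher QAM" can be put in plain numbers: a QAM constellation with M points carries log2(M) bits per symbol, so the cleaner signal path from a deeper digital network lets the same 6 MHz of spectrum move more bits. This is a generic modulation calculation, not a claim about any particular vendor's DAA gear:

```python
import math

# Bits per symbol for a square QAM constellation of M points.
# Denser constellations need a cleaner signal, which is the benefit of
# carrying the signal digitally deeper into the network before conversion.

def bits_per_symbol(qam_order: int) -> int:
    return int(math.log2(qam_order))

for m in (64, 256, 1024, 4096):
    print(f"QAM-{m}: {bits_per_symbol(m)} bits/symbol")

# Moving from QAM-256 to QAM-4096 yields 50% more capacity in the same spectrum:
gain = bits_per_symbol(4096) / bits_per_symbol(256) - 1
print(f"{gain:.0%}")
```

So even before any node splits, a DAA upgrade that enables denser constellations buys real capacity out of the existing coax.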

There is going to be an interesting battle between fans of DOCSIS 3.1 and those that prefer the DAA architecture. DOCSIS 3.1 was created by CableLabs, and the large cable companies who jointly fund CableLabs tend to follow their advice on an upgrade path. Today DOCSIS 3.1 is still in first generation deployment and is just starting to be field tested and there is already a backlog on ordering DOCSIS 3.1 core routers. This opens the door for the half dozen vendors that have developed a DAA solution as an alternative.

While CableLabs didn’t invent DAA, they have blessed three different variations of network design for the technology. The technology has already been trialed in Europe and the Far East and is now becoming available in the US. It’s been rumored that at least one large US cable company is running a trial of the equipment, but there doesn’t seem to be any press on this.

Cable networks are interesting in that you can devise a number of different migration paths to get to an all-digital network. But in this industry the path that is chosen by the largest cable companies tends to become the de facto standard for everybody else. As the large companies buy a given solution the hardware costs drop and the bugs are worked out. As attractive as DAA is, I suspect that as Comcast and others choose the DOCSIS 3.1 path that it will become the path of choice for most cable companies.

Are We Really Funding More DSL?

Recently while speaking at the National Association of Regulatory Utility Commissioners (NARUC), AT&T CEO Randall Stephenson told the attendees that AT&T’s DSL technology is obsolete. This is a rare admission of the truth from AT&T, which has been less than forthcoming over the years about its broadband business.

And it’s a pretty interesting quote from a company that last year accepted $427 million in CAF II funding from the FCC to expand broadband in rural markets. That money is supposedly going to be used to upgrade rural customers to be able to receive at least 10 Mbps download and 1 Mbps upload speeds. CenturyLink and Frontier plan to spend their federal assistance money by expanding DSL. I think it’s widely assumed that AT&T will also use the money for DSL. But we can’t be certain that they aren’t planning to instead use that money to bring cellular wireless to rural homes, against the intentions of the FCC.

To be fair to Stephenson, his response was answering a question about how regulators should look at new technology cycles. Stephenson pointed out that technology cycles have shortened over the years. When DSL was first introduced it was expected to be good for about 10 – 15 years, but today the cycles for new technology have shortened to 5 years – with his example being the transition between 3G and 4G wireless.

Stephenson is right about the speed at which broadband technologies are improving. Since the introduction of DSL we have seen cable modems go through several generations of improvements and in 2016 we are seeing the first widespread roll-out of DOCSIS 3.1 and gigabit speeds from cable companies. And in that same time frame we have seen the development and the maturation of fiber technologies for serving homes. From a performance perspective DSL has been left in the dust.

AT&T certainly still has a lot of DSL in service. But it’s hard to decipher AT&T’s broadband statistics because they lump all broadband customers together. This has gotten more confusing since they picked up DirecTV, which sells satellite broadband. AT&T has been further making a distinction between traditional DSL customers and U-verse customers, most of which are served by bonding two pairs of copper together and using two DSL circuits. But supposedly within the U-verse numbers are also customers on fiber, which many analysts suspect are MDUs or small greenfield fiber trials that AT&T has done over the years.

In the fourth quarter of 2015 AT&T announced a net gain of 192,000 IP broadband customers, which is a mix of the three different types of broadband customers. If AT&T is like Verizon and CenturyLink they have been losing traditional DSL customers at a torrid pace, so it’s hard to know what to make of that number. Are they finally adding some FTTP customers?

But back to DSL. Stephenson is right. At best, a DSL service on a single copper line can deliver perhaps 20 Mbps of data – but conditions are rarely ideal and in the real world DSL is generally a lot slower than that. But even if people could get 20 Mbps from new DSL it’s obsolete because that no longer meets the FCC’s definition of broadband.

It’s a shame that the FCC is going to invest billions in DSL at a time when the large telcos were never going to make those investments on their own. The CAF II funds will channel billions of dollars to the DSL vendors for one last hurrah before the technology hits the dust heap. Without the CAF II money one can imagine the DSL equipment market fading away.

While CAF II is a huge gift to the companies that sell DSL equipment – it’s going to be a long-term curse to people that will be upgraded with CAF II funding. They are going to get upgraded to DSL in a fiber world and the telcos are going to check these areas off as upgraded and needing no more investment. A lot of the first DSL built in the 90s is still working in the network, and sadly we are probably going to find a lot of CAF II DSL still working in rural America twenty years from now.

What I am Thankful for in 2015

Thanksgiving is upon us yet again and I’ve given some thought to those things in the industry and beyond that I am thankful for this year.

Net Neutrality: I am thankful for the net neutrality ruling, more as a consumer than as someone in the industry. From what I can see the largest ISPs and cellular companies had a big bag of nasty tricks waiting for all of us had it not passed. I feel like this ruling took back some of the power in the industry from the ISPs with the FCC as our watchdog. Now we need to wait a while more to see if the courts uphold the FCC. On the other hand, I am not so glad that net neutrality seems to have taken the Federal Trade Commission out of the picture for telecom. They were starting to take a hard look at monopoly abuses and one can hope the FCC will take up where they left off. There is some reassurance that the FTC says they will still play a role, but that role is clearly diminished.

Municipal Competition: I was glad to see the FCC tackle the prohibitions against municipal telecom. As somebody who works mostly with rural broadband issues, we need to encourage anybody, including cities and counties that are willing to tackle bringing broadband to rural places. I can understand why the large ISPs don’t want competition from municipal entities in big cities, but that still has not happened anywhere larger than Chattanooga and probably won’t. I have a harder time seeing why the large ISPs fight so vigorously against competition in rural areas where they don’t spend any capital to maintain their networks. These smaller communities are waking up to the fact that if they don’t take care of the broadband gap themselves that nobody else is likely to do so.

Inching Towards More Privacy: In this last year it became apparent to everybody that the NSA and a ton of commercial companies are spying on all of us. I love the parts of the industry that are taking the side of privacy. There’s Apple, which is encrypting everything in a way that even they can’t decrypt. There are a number of companies working on blockchains and other forms of peer-to-peer communication that ought to be immune from snooping. And there are a number of websites that now promise they aren’t tracking you. We have a long way to go, but it looks like people are starting to care about their privacy.

DOCSIS 3.1: I am thankful for technologies that are making broadband faster. The DOCSIS 3.1 technology that the cable companies are starting to implement will probably help the largest number of Americans get faster broadband. Several of the big cable companies are promising that they will offer faster speeds across the board. I think cable companies have finally awakened to the fact that it doesn’t cost them that much to give out more speed and it shelters them from the competition. And there’s a slow but steady growth of fiber with companies like Google and CenturyLink leading the way. You will hear me whoopin’ and hollerin’ in this blog if somebody brings an affordable gigabit to my neighborhood.

Technology is Getting Better: The speed at which technology in and near the industry is improving is mind-boggling. It seems like I hear about something new almost every day. This year saw 10 gigabit fiber terminals that are cheap enough for home and small business use. We’ve seen a plethora of improvements in OTT boxes like Roku and the gaming systems. 4K video has made it into the mainstream conversation in the last year. The speed and processing power of cellphones have doubled in the last year.

And My Readers: This marks my third Thanksgiving with this blog and I don’t seem to be running out of topics. I am truly thankful that people read this from time to time. I started writing this blog as a way to force myself to stay up with current events in the industry and it has done that in spades. I seem to learn something new every day, and for that I am most thankful.

Finally, Speed Competition

We are at the beginning of a big change in urban Internet speeds. Recently, there have been all sorts of announcements about companies upgrading speeds or wanting to build fiber in major markets.

For instance, Comcast says that they are going to upgrade all of their systems to DOCSIS 3.1 within about two years. This new CableLabs standard is going to allow them to offer far faster speeds to their customers. DOCSIS 3.1 allows a cable system to bond together empty channels to make one large data pipe, and theoretically, if a network were empty of television channels, it could offer download speeds up to 10 Gbps. But since there are still lots of cable channels on these networks, the more realistic maximum speed for now will be a gigabit or maybe less, depending upon the spare channels available in any given system.
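To make the channel-bonding math concrete, here is a back-of-the-envelope sketch. The per-channel throughput figures are my own illustrative assumptions, not numbers from Comcast or CableLabs:

```python
# Rough downstream capacity from bonding spare 6 MHz channel slots.
# Assumed yields (illustrative): a slot carries roughly 38 Mbps under
# DOCSIS 3.0 QAM-256, and closer to ~50 Mbps under DOCSIS 3.1's OFDM
# and higher-order modulation.

MBPS_PER_SLOT = 50  # assumed DOCSIS 3.1 yield per 6 MHz slot

def downstream_capacity(spare_slots, mbps_per_slot=MBPS_PER_SLOT):
    """Aggregate downstream capacity (Mbps) across bonded spare slots."""
    return spare_slots * mbps_per_slot

# A system with ~20 spare channels lands right around a gigabit:
print(downstream_capacity(20))  # 1000
```

This is why the gigabit-or-less figure is realistic: the theoretical 10 Gbps assumes a plant stripped of television channels, while a real system only has a couple dozen slots to spare.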

Comcast has already started the process of upgrading customer speeds. For example, in much of the northeast they have upgraded customers from 25 Mbps to 75 Mbps and from 105 Mbps to 150 Mbps. They’ve announced that these same upgrades will be done in all of their systems. They’ve said in future years there will be more upgrades to go even faster.

Other cable companies are likely to follow suit. MediaCom has already made gigabit announcements. Time Warner in Austin also greatly increased speeds. Cox has announced aggressive plans for speeds. It’s likely almost all urban cable systems will be upgraded to DOCSIS 3.1 within a few years.

Meanwhile, CenturyLink has started building fiber in most of their larger markets. It looks like they are building fiber in cities like Seattle, Portland, Minneapolis, Phoenix, Denver, Salt Lake City, and a number of other markets. They will offer speeds that vary from 40 Mbps for $30 to gigabit speeds for $80 as part of bundled packages. CenturyLink is also experimenting right now in Salt Lake City with G.fast, testing a 100 Mbps product over copper. Between the two products the company thinks they will be able to offer faster speeds to a lot of urban and suburban customers.

And of course, Google has been rolling out fiber and can be credited with popularizing the concept of gigabit fiber. They have built or are launching in Kansas City, Austin, Atlanta, Provo, Salt Lake City, Nashville, Raleigh-Durham and now San Antonio. They have released a long list of other cities where they may go next.

Finally, there are numerous smaller companies and municipalities that are already building fiber or who have plans to build fiber.

Comcast’s new philosophy is a 180-degree turnabout from a few years ago, when they said that customers didn’t need bandwidth and that they would give customers only what Comcast thought they needed. It seems now that Comcast is adopting the philosophy of unilaterally increasing speeds, even in markets where they might not have an immediate competitor on the horizon. They already have the customers and they already have the networks, and they can take the wind out of the sails of a potential fiber competitor if customers in any given market already have fast speeds at an affordable price.

I think Comcast and the other companies are smart to do this. The higher-priced data products are probably the highest-margin products we have ever had in this industry. It doesn’t cost more than a few dollars to buy the raw bandwidth needed to serve a data customer, and it’s widely believed that for large companies the margins are in the 80% to 90% range. It’s a wise decision to protect these customers, and by being proactive with speeds the cable companies will make it a lot harder for other companies to take their customers. And I think they have finally begun to learn the little secret that many have already figured out – faster speeds don’t really hurt profitability, and a customer with a 100 Mbps connection doesn’t use much more data than one with a 20 Mbps connection; they just download things faster.

So what we are seeing now is competition through speed rather than competition through pricing. All of the comparisons I have ever seen show that US broadband prices are significantly higher than in other developed countries. When Google or CenturyLink enters a market with a $70 to $80 gigabit product they are not lowering prices, and are actually luring customers to pay more than they pay today. It’s an interesting market when even in the most competitive places the prices don’t really come down.

Can Cable Networks Deliver a Gigabit?

Time Warner Cable recently promised the Los Angeles City Council that they could bring gigabit service to the city by 2016. This raises the question – can today’s cable networks deliver a gigabit?

The short answer is yes, they are soon going to be able to do that, but with a whole list of caveats. So let me look at the various issues involved:

  • DOCSIS 3.1: First, a cable company has to upgrade to DOCSIS 3.1. This is the latest technology from CableLabs that lets cable companies bond multiple channels together in a cable system to deliver faster data speeds. This technology is just now hitting the market, so by next year cable companies should have it implemented and tested.
  • Spare Channels: To get gigabit speeds, a cable system is going to need at least 20 empty channels on their network. Cable companies for years have been making digital upgrades in order to cram more channels into the existing channel slots. But they also have continued demands to carry more channels which then eats up channel slots. Further, they are looking at possibly having to carry some channels of 4K programming, which is a huge bandwidth eater. For networks without many spare channels it can be quite costly to free up this much empty space on the network. But many networks will have this many channels available now or in the near future.
  • New Cable Modems: DOCSIS 3.1 requires a new, relatively expensive cable modem. Because of this a cable company is going to want to keep existing data customers where they are on the system and use the new swath of bandwidth selectively for the new gigabit customers.
  • Guaranteed versus Best Effort: If a cable company wants to guarantee gigabit speeds then they are not going to be able to have too many gigabit customers at a given node. This means that as the number of gigabit customers grows they will have to ‘split’ nodes, which often means building more fiber to feed the nodes plus an electronics upgrade. In systems with large nodes this might be the most expensive part of the upgrade to gigabit. The alternative is a best-effort product that is only capable of a gigabit at 3:00 in the morning when the network has no other traffic.
  • Bandwidth to the Nodes: Not all cable companies are going to have enough existing bandwidth between the headend and the nodes to incorporate an additional gigabit of data. That will mean an upgrade of the node transport electronics.
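The guaranteed-versus-best-effort tradeoff above is really just division. Here is a sketch of the node math, where the node capacity and contention ratio are illustrative assumptions rather than any operator's real figures:

```python
# How much of a node's spare DOCSIS 3.1 capacity each home actually sees.
# All figures below are illustrative assumptions.

def worst_case_per_home(node_capacity_mbps, homes):
    """Per-home Mbps if every home on the node pulls data at once."""
    return node_capacity_mbps / homes

def typical_peak_per_home(node_capacity_mbps, homes, contention=20):
    """Per-home Mbps assuming only 1 in `contention` homes is busy."""
    active = max(1, homes // contention)
    return node_capacity_mbps / active

# A 1 Gbps slice of spectrum shared by a 200-home node:
print(worst_case_per_home(1000, 200))    # 5.0
print(typical_peak_per_home(1000, 200))  # 100.0
```

Even under a generous 20:1 contention assumption the node only supports about 100 Mbps per active home, which is why any real uptake of gigabit customers forces the node splits described above.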

So the answer is that Time Warner will be capable of delivering a gigabit next year as long as they upgrade to DOCSIS 3.1, have enough spare channels, and as long as they don’t sell too many gigabit customers and end up needing massive node upgrades.

And that is probably the key point about cable networks and gigabit. Cable networks were designed to provide shared data among many homes at the same time. This is why cable networks have been infamous for slowing down at peak demand times when the number of homes using data is high. And that’s why they have always sold their speeds as ‘up to’ a listed number. It’s incredibly hard for them to guarantee a speed.

When you contrast this to fiber, it’s relatively easy for somebody like Google to guarantee a gigabit (or any other speed). Their fiber networks share data among a relatively small number of households and they are able to engineer to be able to meet the peak speeds.
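The same arithmetic shows why fiber's worst case is so much more forgiving. Using standard GPON numbers (the 32-way split is a commonly used ratio, though operators vary):

```python
# GPON shares ~2.4 Gbps of downstream capacity across one optical splitter.
GPON_DOWNSTREAM_MBPS = 2400  # standard GPON downstream line rate
SPLIT = 32                   # assumed homes per PON port (a common ratio)

worst_case = GPON_DOWNSTREAM_MBPS / SPLIT
print(worst_case)  # 75.0 Mbps per home even if every home on the
# splitter saturates the link at the same instant; with normal
# statistical sharing, gigabit bursts to any one home are easy.
```

Compare that 75 Mbps floor to a large coax node, where the worst case can be a few Mbps per home, and the engineering difference behind Google's guarantee becomes clear.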

Cable companies will certainly be able to deliver gigabit speeds. But I find it unlikely for a while that they are going to price it at $70 like Google or that they are going to try to push it to very many homes. There are very few, if any, cable networks that are ready to upgrade all or even most of their customers to gigabit speeds. There are too many chokepoints in their networks that cannot handle that much bandwidth.

But as long as a cable network meets the base criteria I discussed, it can sell some gigabit without too much strain. Expect them to price gigabit bandwidth high enough that they don’t get more than 5% or so of customers on the high-bandwidth product. There are other network changes coming that will make this easier. I just talked last week about a new technology that will move the CMTS to the nodes, something that will make it easier to offer large bandwidth. This also gets easier as cable systems move closer to offering IPTV, or at least to finding ways to be more efficient with television bandwidth.

Finally, there is always the Comcast solution. Comcast today is selling a 2 gigabit connection that is delivered over fiber. It’s priced at $300 per month and is only available to customers who live very close to an existing Comcast fiber. Having this product allows Comcast to advertise as a gigabit company, even though this falls into the category of ‘press release’ product rather than something that very many homes will ever decide to buy. We’ll have to wait and see if Time Warner is going to make gigabit affordable and widely available. I’m sure that is what the Los Angeles City Council thinks they heard, but I seriously doubt that is what Time Warner meant.