The Next Big Fiber Upgrade

CableLabs recently published a blog post announcing the release of the specifications for CPON (Coherent Passive Optical Networks), a new fiber technology that can deliver 100 gigabits of bandwidth to home and business nodes. The working group that developed the specification includes seventeen optical electronics vendors, fourteen fiber network operators, CableLabs, and SCTE (Society for Cable Telecommunications Engineers). For those interested, the new specifications can be downloaded here.

The blog notes the evolution of PON from the first BPON technology, which delivered 622 Mbps, to today’s PON, which can deliver 10 gigabits. It also notes that current PON technology relies on Intensity-Modulation Direct-Detect (IM-DD) technology, which will reach its speed limit at about 25 gigabits.

The CPON specification instead relies on coherent optical technology, which is the basis for today’s backbone fiber networks that are delivering speeds up to 400 Gbps. The specification calls for delivering the higher bandwidth using a single wavelength of light, which is far more efficient and less complicated than a last-mile technology like NG-PON2 that balances multiple wavelengths on the customer path. This specification is the first step towards adapting our long-haul technology to serve multiple locations in a last-mile network.

There are a few aspects of the specification that ISPs are going to like.

  • The goal is to create CPON as an overlay that will coexist with existing PON technology. That will allow a CPON network to reside alongside an existing PON network and not require a flash cut to the new technology.
  • CPON will increase the effective reach of a PON network from 12 miles today to 50 miles. This would allow an OLT placed in a hut in a city to reach customers well into the surrounding rural areas.
  • CPON will allow up to 512 customers to share a neighborhood node. That means more densely packed OLT cards that will need less power and cooling. On the downside, that also means that a lot of customers can be knocked out of service with a card failure.
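For a rough sense of what a 512-way split means for each customer, here is a minimal sketch of the sharing arithmetic. The 100-gigabit capacity and the split sizes come from the discussion above; the oversubscription ratio is purely an illustrative assumption, since the specification doesn’t dictate how ISPs will engineer their nodes.

    # Rough arithmetic for how a shared CPON node divides capacity.
    # The 100 Gbps capacity and the split sizes follow the discussion above;
    # the oversubscription ratio is purely an illustrative assumption.

    PON_CAPACITY_GBPS = 100  # assumed CPON capacity on the shared wavelength

    def per_customer_mbps(split_size: int, oversubscription: float = 1.0) -> float:
        """Bandwidth each customer could count on, in Mbps.

        With oversubscription = 1.0 this is the worst case where every
        customer transmits at once; real networks assume only a fraction
        of customers are active at any moment.
        """
        return PON_CAPACITY_GBPS * 1000 * oversubscription / split_size

    for split in (32, 64, 128, 512):
        print(f"{split:>3} customers/node: "
              f"{per_customer_mbps(split):8.1f} Mbps worst case, "
              f"{per_customer_mbps(split, oversubscription=4):8.1f} Mbps at 4:1 oversubscription")

Even at the full 512-way split the worst case works out to roughly 195 Mbps per customer, which helps explain why the larger split is attractive despite the bigger blast radius of a card failure.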

The blog touts the many benefits of having 100-gigabit broadband speeds in the last mile. CPON will be able to support applications like high-resolution interactive video, augmented reality, virtual reality, mixed reality, the metaverse, smart cities, and pervasive communications.

One of the things not mentioned by the blog is that last-mile fiber technology is advancing far faster than the technology of the devices used in the last mile. There aren’t a lot of devices in our homes and businesses today that can fully digest a 10-gigabit data pipe, and stretching to faster speeds means developing a new generation of chips for user devices. Releasing specifications like this one puts chipmakers on alert to begin contemplating those faster chips and devices.

There will be skeptics who say that we don’t need technology at these faster speeds. But in only twenty years, we’ve gone from dial-up to broadband delivered by 10-gigabit technology. None of these skeptics can envision the uses for broadband that can be enabled over the next twenty years by newer technologies like CPON. If there is any lesson we’ve learned from the computer age, it’s that we always find a way to use faster technology within a short time after it’s developed.

Predicting Broadband Usage on Networks

One of the hardest jobs these days is being a network engineer who is trying to design networks to accommodate future broadband usage. We’ve known for years that the amount of data used by households has been doubling every three years – but predicting broadband usage is never that simple.
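As a quick illustration of what that doubling period implies year over year, here is a minimal sketch of the conversion; the three-year doubling is the only input taken from the paragraph above, and the rest is simple compounding.

    # Convert a "usage doubles every N years" rule of thumb into the
    # equivalent annual growth rate, then project a few years forward.

    def annual_growth_rate(doubling_period_years: float) -> float:
        """Annual growth rate implied by a given doubling period."""
        return 2 ** (1 / doubling_period_years) - 1

    rate = annual_growth_rate(3)                 # household usage doubling every three years
    print(f"Implied annual growth: {rate:.1%}")  # roughly 26% per year

    usage = 100.0  # index today's household usage at 100
    for year in range(1, 7):
        usage *= 1 + rate
        print(f"Year {year}: usage index {usage:.0f}")

A three-year doubling works out to roughly 26% growth per year, which is the kind of number an engineer plans against, at least until something changes the curve.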

Consider the recent news from OpenSource, a company that monitors usage on wireless networks. They report a significant shift in WiFi usage by cellular customers. Over the last year AT&T and Verizon have introduced ‘unlimited’ cellular plans and T-Mobile has pushed their own unlimited plans harder in response. While the AT&T and Verizon plans are not really unlimited and have caps a little larger than 20 GB per month, the introduction of the plans has changed the mindset of numerous users who no longer automatically seek WiFi networks.

In the last year the percentage of WiFi usage on the Verizon network fell from 54% to 51%, on AT&T from 52% to 49%, and on T-Mobile from 42% to 41%. Those might not sound like major shifts, but for the Verizon network it means that the cellular network saw an unexpected additional 6% growth in data volumes in one year over what the company might normally have expected. For a network engineer trying to make sure that all parts of the network are robust enough to handle the traffic, this is a huge change, and it means that chokepoints in the network will appear a lot sooner than expected. In this case the change to unlimited plans is something that was cooked up by marketing folks, and it’s unlikely that the network engineers knew about it any sooner than anybody else.
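To show where that additional 6% figure comes from, here is a minimal sketch of the share arithmetic. It holds total traffic constant just to isolate the effect of the WiFi shift, which is a simplifying assumption rather than anything from the report.

    # How a small drop in WiFi share becomes a meaningful jump in cellular
    # traffic. Total traffic is held constant here purely to isolate the
    # effect of the shift; in reality total usage is growing at the same time.

    carriers = {
        "Verizon":  (0.54, 0.51),
        "AT&T":     (0.52, 0.49),
        "T-Mobile": (0.42, 0.41),
    }

    for name, (wifi_before, wifi_after) in carriers.items():
        cellular_before = 1 - wifi_before
        cellular_after = 1 - wifi_after
        extra_growth = cellular_after / cellular_before - 1
        print(f"{name}: cellular share {cellular_before:.0%} -> {cellular_after:.0%}, "
              f"extra cellular traffic ~{extra_growth:.1%}")

For Verizon the cellular share rises from 46% to 49%, which is about 6.5% more traffic on the cellular network than the old share would have produced, on top of whatever normal growth was already expected.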

I’ve seen the same thing happen with fiber networks. I have a client who built one of the first fiber-to-the-home networks using BPON, the first generation of electronics. The network was delivering broadband speeds of between 25 Mbps and 60 Mbps, with most customers in the range of 40 Mbps.

Last year the company started upgrading nodes to the newer GPON technology, which upped the potential customer speeds on the network to 1 gigabit. The company introduced both a 100 Mbps product and a gigabit product, but very few customers immediately upgraded. The upgrade meant changing the electronics at the customer location, but also involved a big boost in the size of the data pipes between neighborhood nodes and the hub.

The company was shocked to see data usage in the nodes immediately spike upward by between 25% and 40%. After all, they had not arbitrarily increased customer speeds across the board, but had just changed the technology in the background. For the most part customers had no idea they had been upgraded – so the spike can’t be attributed to a change in customer behavior like what happened to the cellular companies after introducing unlimited data plans.

However, I suspect that much of the increased usage still came from changed customer behavior. While customers were not notified that the network had been upgraded, I’m sure that many customers noticed the change. The biggest trend we have seen in household broadband demand over the last two years is the desire by households to run multiple big data streams at the same time. Before the upgrades, households were likely restricting their usage by not allowing kids to game or do other high-bandwidth activities while the household was video streaming or doing work. After the upgrade they probably found they no longer had to self-monitor and restrict usage.

In addition to this likely change in customer behavior, the spikes in traffic were also likely due to correcting bottlenecks in the older fiber network that the company had never recognized or understood. I know there is a general impression in the industry that fiber networks don’t see the same kind of bottlenecks that we expect in cable networks. In the case of this network, a speed test on any given customer generally showed a connection to the hub at the speeds that customers were purchasing – and so the network engineers assumed that everything was okay. There were a few complaints from customers that their speeds bogged down in the evenings, but such calls were sporadic and not widespread.

The company decided to make the upgrade because the old electronics were no longer supported by the vendor, and they also wanted to offer faster speeds to increase revenues. They were shocked to find that the old network had been choking customer usage. This change really shook the engineers at the company, and they feared that the broadband growth curve was now going to continue at this faster rate. Luckily, within a few months each node settled back down to the historic growth rates. However, the company found itself instantly with network usage they hadn’t expected for at least another year, making them that much closer to the next upgrade.

It’s hard for a local network owner to predict the changes that are going to affect network utilization. For example, they can’t predict that Netflix will start pushing 4K video. They can’t know that the local schools will start giving homework that involves watching a lot of videos at home. Even though we all understand the overall growth curve for broadband usage, it doesn’t grow in a straight line, and there are periods of faster and slower growth along the curve. It’s enough to cause network engineers to go gray a little sooner than expected!

What’s the Real Cost of Providing the Internet?

There is an interesting conversation happening in England about the true cost of operating the Internet. Because it is an island nation, all of the costs of operating the network must be borne by the whole country, and so every part of the Internet cost chain is being recognized and counted as a cost. That’s very different than the way we do it here.

There are two issues that are concerning British officials – power costs and network capacity. Reports are that the data centers and electronics hubs needed to operate the Internet now consume 8% of all of the power produced in the country. And that share is growing rapidly. At the current rate of growth of Internet consumption, it’s estimated that the power requirements of the Internet are doubling every four years.

Here in the US we don’t have the same level of concern about power costs. First, we have hundreds of different power companies scattered across the country, and we don’t produce electricity in the same places where we use the Internet. Second, in this country the large data centers are operated by billion-dollar companies like Amazon, Google, and Facebook, which can afford to pay the electric bills, mostly due to advertising revenues. But in a country like England, that sort of drain on electricity capacity must be borne by all electric ratepayers when the whole grid hits capacity and must somehow be upgraded.

And it’s going to get a lot worse. If the pace of power consumption needed for broadband doesn’t somehow slow down, then by 2035 the Internet will be using all of the power produced in the British Isles today. It’s not likely that the power needs will grow quite that fast. For example, there are far more power-efficient routers and switches being made for data centers that are going to knock the power demand curve down a notch, but there is no reason to think that the demand for Internet usage is going to stop growing anytime soon.
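The 2035 claim is easy to sanity-check against the numbers above. Here is a minimal sketch of the projection, starting from the 8% share and the four-year doubling period; it assumes total generation stays flat, which is a simplification, and the exact crossover year depends on when you start counting.

    # Project the Internet's share of national power generation forward,
    # starting from 8% of today's output and doubling every four years.
    # Assumes total generation stays flat, which is a simplification.

    share = 0.08        # Internet's share of power generation today
    doubling_years = 4
    years = 0

    while share < 1.0:
        share *= 2 ** (1 / doubling_years)   # one year of growth
        years += 1

    print(f"At this pace the Internet consumes today's entire output in about {years} years")
    # roughly 15 years from the starting point, broadly consistent with the 2035 figure

At an unchanged pace the crossover lands roughly fifteen years out, so the 2035 worry is not far-fetched even if, as noted above, more efficient electronics will likely push the curve out somewhat.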

In Britain they are also worried about the cost of maintaining the network. They say that the bulk of their electronics need to be upgraded in the next few years. In the industry we always talk about fiber being a really long-term investment, and the fiber is so good today that we really don’t know how long it’s going to last – 50 years, 75 years, longer? But that is not true for the electronics. Those electronics have to be replaced every 7 to 10 years and that can be expensive.

In this country all of the companies and cities that were early adopters of FTTP technology used BPON, the first fiber-to-the-premises technology. This technology was the best thing at the time and was far faster than cable modems – but that is no longer the case. BPON is limited in two major ways. First, as happens with many technologies, the manufacturers all stopped supporting BPON. That means it’s hard to buy replacement parts, and a BPON network is at major risk of failure if one of the larger core components of the network dies.

BPON is also different enough from newer technologies that the new replacements, like GPON, are not backwards compatible. This means that in order to upgrade to a newer version of fiber technology every electronic component in the network from the core to the ONTs on customer premises must be replaced, making upgrades very costly. Even the way BPON is strung to homes is different, meaning that there is fiber field work needed to upgrade it. We have hopefully gotten smarter lately; a lot of fiber electronics today are being designed to still work with later generations of equipment.

This is what happened in England. The country’s telecoms were early adopters of fiber, and so the electronics throughout the country are already aged and running out of capacity. I saw a British article where the author was worried that the networks were getting ‘full’ and that more fiber would have to be built. The author didn’t recognize that upgrading the electronics can instead use the existing fiber to deliver a lot more data.

England is one of the wealthier nations on the global scale, and one has to be concerned about how the poorer parts of the world are going to deal with these issues. As we introduce the Internet into Africa and other poorer regions, one has to ask how a poor country that already has trouble generating enough electricity is going to handle the demand created by the Internet. And how will poorer nations keep up with the constant upgrades needed to keep the networks operating?

Perhaps I am worrying about nothing and maybe we will finally see the cheap fusion reactors that have been just over the horizon since I was a teenager. But when a country like England talks about the possible need to ration Internet usage, or to somehow meter it so that big users pay a lot more, one has to be concerned. In our country the big ISPs always complain about profits, but they are wildly profitable. The US and a few other nations are very spoiled and we can take the continued growth of the Internet for granted. Much of the rest of the world, however, is going to have a terrible time keeping up, and that is not good for mankind as a whole.