A Strategy for Upgrading GPON

I’ve been asked a lot during 2018 whether fiber overbuilders ought to be considering the next generation of PON technology that might replace GPON. They hear about the newer technologies from vendors and the press. For example, Verizon announced a few months ago that they would begin introducing Calix NGPON2 into their fiber network next year. The company recently ran a test of the technology in Tampa and achieved 8 Gbps speeds. AT&T has been evaluating the other alternative technology, XGS-PON, and may introduce it into their network in 2019.

Before anybody invests a lot of money in a GPON network it’s a good idea to ask if there are better alternatives – a question worth asking about every technology deployed in a network.

One thing to consider is how Verizon plans to use NGPON2. They view it as the least expensive way to deliver bandwidth to a 5G network that consists of multiple small cells mounted on poles. They like PON technology because it accommodates multiple end-points using a single last-mile fiber, meaning a less fiber-rich network than other 10-gigabit technologies require. Verizon also recently began the huge task of consolidating their numerous networks, and PON gives them a way to bring multi-gigabit connections of all sorts onto a single platform.

Very few of my clients operate networks that have a huge number of 10-gigabit local end points. Anybody that does should consider Verizon’s decision because NGPON2 is an interesting and elegant solution for handling multiple large customer nodes while also reducing the quantity of lit fibers in the network.

Most clients I work with operate PON networks to serve a mix of residential and business customers. The first question I always ask them is whether a new technology will solve an existing problem in their network. Is there anything that a new technology can do that GPON can’t? Is congestion in neighborhood nodes overwhelming the GPON network?

Occasionally I’m told that a client wants to provide faster connections to a handful of customers for whom the PON network is not sufficient – dedicated gigabit or larger connections to large businesses, cell sites or schools. We’ve always recommended that clients design networks with the capability to provide large Ethernet connections outside the PON network. There are numerous affordable technologies for delivering a 10-gigabit pipe directly to a customer with active Ethernet. It seems like overkill to upgrade the electronics for all customers to satisfy the needs of a few large customers rather than overlaying a second technology onto the network. We’ve always recommended that networks include some extra fiber pairs in every neighborhood exactly for this purpose.

I’ve not yet heard an ISP tell me that they are overloading a residential PON network due to customer data volumes. This is not surprising. GPON was introduced just over a decade ago, at a time when the big ISPs were offering customers speeds in the range of 25 Mbps. GPON delivers 2.4 gigabits to up to 32 homes and can easily support residential gigabit service. At its introduction GPON represented at least a forty-times increase in customer capacity compared to DSL and cable modems – a gigantic leap forward in capability, and it takes a long time for consumer household usage to grow into that much new capacity. The biggest previous leap was the one from dial-up to 1 Mbps DSL – a 17-times increase in capacity.
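Those multiples are easy to check. Here’s the arithmetic as a quick sketch (assuming a 56 kbps dial-up baseline, which the paragraph implies but doesn’t state):

```python
# The two big capacity leaps, in the round numbers cited above.
DIAL_UP_KBPS = 56          # assumed dial-up baseline
DSL_KBPS = 1000            # 1 Mbps DSL
PRE_GPON_MBPS = 25         # typical big-ISP speed when GPON arrived
GIGABIT_MBPS = 1000        # what GPON can deliver to a customer

print(f"Dial-up to 1 Mbps DSL: ~{DSL_KBPS // DIAL_UP_KBPS}x")          # ~17x
print(f"25 Mbps to gigabit on GPON: {GIGABIT_MBPS // PRE_GPON_MBPS}x")  # 40x
```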

Even if somebody starts reaching capacity on a GPON network there are some inexpensive upgrades that are far less costly than moving to a new technology. A GPON network won’t reach capacity evenly – congestion will show up in a few neighborhood nodes first. The capacity in a neighborhood GPON node can easily be doubled by splitting it into two PONs, cutting the size of each node in half. I have one client that did the math and said that as long as they can buy GPON equipment they would upgrade by splitting a few times – from 32 homes to 16, from 16 to 8, and maybe even from 8 to 4 – before they’d consider tearing out GPON for something new. Each such split doubles capacity, so splitting nodes three times is an 8-fold increase. If we continue on the path of seeing household bandwidth demand double every three years, then splitting nodes three times would add roughly another decade to the life of a PON network. In doing that math it’s important to understand that splitting a node actually more than doubles effective capacity, because it also decreases the oversubscription factor for each customer on the node.
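As a sketch of that headroom math (assuming demand keeps doubling every three years, per the growth trend cited above):

```python
# Each node split halves the homes per PON and so doubles per-home
# capacity; demand that doubles every three years eats one split's
# worth of headroom per doubling period.
DOUBLING_PERIOD_YEARS = 3

for splits in (1, 2, 3):
    capacity_multiple = 2 ** splits
    extra_years = splits * DOUBLING_PERIOD_YEARS
    print(f"{splits} split(s): {capacity_multiple}x capacity, "
          f"~{extra_years} more years of headroom")
# Three splits (32 -> 16 -> 8 -> 4 homes) is an 8x gain, roughly nine
# extra years -- more in practice, since smaller nodes also cut the
# oversubscription factor per customer.
```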

At CCG we’ve always prided ourselves on being technology neutral and vendor neutral. We think network providers should use the technology that most affordably fits the needs of their end users. We rarely see a residential fiber network where GPON is not the clear winner from a cost and performance perspective. We have clients using numerous active Ethernet technologies for serving large businesses or for long-haul transport. But we are always open-minded and would readily recommend NGPON2 or XGS-PON if it were the best solution. We just have not yet seen a network where the new technology is the clear winner.

Predicting Broadband Usage on Networks

One of the hardest jobs these days is being a network engineer who is trying to design networks to accommodate future broadband usage. We’ve known for years that the amount of data used by households has been doubling every three years – but predicting broadband usage is never that simple.

Consider the recent news from OpenSignal, a company that monitors usage on wireless networks. They report a significant shift in WiFi usage by cellular customers. Over the last year AT&T and Verizon have introduced ‘unlimited’ cellular plans and T-Mobile has pushed its own unlimited plans harder in response. While the AT&T and Verizon plans are not really unlimited – they have caps a little larger than 20 GB per month – the introduction of the plans has changed the mindset of numerous users, who no longer automatically seek out WiFi networks.

In the last year the percentage of WiFi usage on the Verizon network fell from 54% to 51%, on AT&T from 52% to 49%, and on T-Mobile from 42% to 41%. Those might not sound like major shifts, but for Verizon it means the cellular network saw an unexpected additional 6% growth in data volumes in one year over what the company might normally have expected. For a network engineer trying to make sure that all parts of the network are robust enough to handle the traffic this is a huge change, and it means that chokepoints in the network will appear a lot sooner than expected. In this case the change to unlimited plans was cooked up by marketing folks, and it’s unlikely that the network engineers knew about it any sooner than anybody else.
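To see where that 6% comes from, here’s the arithmetic on Verizon’s numbers (a sketch that holds total usage constant to isolate the shift – in reality total usage was also growing on top of this):

```python
# A 3-point drop in WiFi offload means the cellular side's share of
# traffic rises from 46% to 49% -- about 6.5% more cellular volume
# even before any underlying growth in total usage.
wifi_share_before = 0.54
wifi_share_after = 0.51

cellular_before = 1 - wifi_share_before   # 0.46
cellular_after = 1 - wifi_share_after     # 0.49

extra_growth = cellular_after / cellular_before - 1
print(f"Unexpected extra cellular traffic: {extra_growth:.1%}")   # ~6.5%
```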

I’ve seen the same thing happen with fiber networks. I have a client who built one of the first fiber-to-the-home networks and used BPON, the first generation of PON electronics. The network delivered broadband speeds of between 25 Mbps and 60 Mbps, with most customers in the range of 40 Mbps.

Last year the company started upgrading nodes to the newer GPON technology, which upped the potential customer speeds on the network to 1 gigabit. The company introduced both a 100 Mbps product and a gigabit product, but very few customers immediately upgraded. The upgrade meant changing the electronics at the customer location, but also involved a big boost in the size of the data pipes between neighborhood nodes and the hub.

The company was shocked to see data usage in the nodes immediately spike upward by between 25% and 40%. After all, they had not arbitrarily increased customer speeds across the board, but had just changed the technology in the background. For the most part customers had no idea they had been upgraded – so the spike can’t be attributed to a change in customer behavior like the one the cellular companies saw after introducing unlimited data plans.

However, I suspect that much of the increased usage still came from changed customer behavior. While customers were not notified that the network had been upgraded, I’m sure that many of them noticed the change. The biggest trend we’ve seen in household broadband demand over the last two years is the desire by households to run multiple big data streams at the same time. Before the upgrade, households were likely restricting their own usage – not letting the kids game or do other high-bandwidth activities while the household was streaming video or doing work. After the upgrade they probably found they no longer had to self-monitor and restrict usage.

In addition to this change in customer behavior, the spikes in traffic were probably also due to the upgrade correcting bottlenecks in the older fiber network that the company had never recognized or understood. There is a general impression in the industry that fiber networks don’t see the same kind of bottlenecks we expect in cable networks. In the case of this network, a speed test on any given customer generally showed a connection to the hub at the speed the customer was purchasing – and so the network engineers assumed that everything was okay. There were a few complaints from customers that their speeds bogged down in the evenings, but such calls were sporadic and not widespread.

The company decided to make the upgrade because the old electronics were no longer supported by the vendor and because they wanted to offer faster speeds to increase revenues. They were shocked to find that the old network had been choking customer usage. The change really shook the engineers at the company, who feared that the broadband growth curve would now continue at the faster rate. Luckily, within a few months each node settled back down to the historic growth rate. However, the company found itself instantly with network usage it hadn’t expected for at least another year, putting it that much closer to the next upgrade.

It’s hard for a local network owner to predict the changes that are going to affect network utilization. They can’t predict that Netflix will start pushing 4K video. They can’t know that the local schools will start assigning homework that involves watching a lot of video at home. Even though we all understand the overall growth curve for broadband usage, it doesn’t grow in a straight line – there are periods of faster and slower growth along the curve. It’s enough to cause network engineers to go gray a little sooner than expected!

What’s the Next FTTP Technology?

There is a lot of debate within the industry about the direction of the next generation of last mile fiber technology. There are three possible technologies that might be adopted as the preferred next generation of electronics – NG-PON2, XGS-PON or active Ethernet. All of these technologies are capable of delivering 10 Gbps streams to customers.

Everybody agrees that the current widely deployed GPON is starting to get a little frayed around the edges. That technology delivers 2.4 Gbps downstream and 1.2 Gbps upstream for up to 32 customers, although most networks I work with are configured to serve at most 16 customers per PON. All the engineers I talk to think this is still adequate technology for residential customers, and I’ve never heard of a neighborhood PON being maxed out for bandwidth. But many ISPs already use something different for larger business customers that demand more bandwidth than a PON can deliver.

The GPON technology is over a decade old, which generally is a signal to the industry to look for the next-generation replacement. This pressure usually starts with vendors who want to make money pushing the latest and greatest new technology – and this time is no different. But with all of the vendor hype taken out of the equation, it’s always been the case that a new technology is only accepted once it achieves an industry-wide economy of scale. And that almost always means being adopted by at least one large ISP. There are a few exceptions to this, like the first generation of telephone smart switches that found success with small telcos and CLECs first – but most technologies go nowhere until a vendor can mass-manufacture units to get the costs down.

The most talked-about technology is NG-PON2 (next-generation passive optical network). This technology works by using tunable lasers that can function at several different light frequencies, which allows more than one PON to be transmitted simultaneously over the same fiber at different wavelengths. But that makes this a complex technology, and the key issue is whether it can ever be manufactured at price points that match the alternatives.

The only major proponent of NG-PON2 today is Verizon, which recently did a field trial to test the interoperability of several different vendors including Adtran, Calix, Broadcom, Cortina Access and Ericsson. Verizon is touting the technology, but there is some doubt whether they alone can drag the rest of the industry along. Verizon seems enamored with the idea of using the technology to provide bandwidth for the small cell sites needed for a 5G network. But the company is not building much new residential fiber. They announced they would be building a broadband network in Boston, which would be their first new construction in years, but there is speculation that a lot of that deployment will use wireless 60 GHz radios instead of fiber for the last mile.

The big question is whether Verizon can create an economy of scale that gets NG-PON2 prices down. The whole industry agrees that NG-PON2 is the best technical solution because it can deliver 40 Gbps to a PON while also allowing great flexibility in assigning different customers to different wavelengths. But the best technological solution is not always the winning solution, and the concern for most of the industry is cost. Today the early NG-PON2 electronics are priced at 3 to 4 times the cost of GPON, due in part to the complexity of the technology, but also to the lack of economy of scale without any major purchaser of the technology.
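For a sense of the capacity-versus-cost trade-off, here’s a rough sketch using the figures above (the four-wavelength count is the initial TWDM configuration; an eight-wavelength version would double the aggregate):

```python
# NG-PON2 stacks multiple 10 Gbps PONs on one fiber at different
# wavelengths; four wavelengths yields the 40 Gbps figure cited above.
WAVELENGTHS = 4
GBPS_PER_WAVELENGTH = 10
GPON_GBPS = 2.4
COST_MULTIPLE = (3, 4)    # early NG-PON2 pricing vs. GPON, per the text

aggregate = WAVELENGTHS * GBPS_PER_WAVELENGTH
print(f"Aggregate capacity: {aggregate} Gbps per fiber")
print(f"Capacity vs. GPON: ~{aggregate / GPON_GBPS:.0f}x")             # ~17x
print(f"Early cost vs. GPON: {COST_MULTIPLE[0]}x to {COST_MULTIPLE[1]}x")
# Far more capacity per dollar than GPON -- but only a bargain if you
# actually need that much capacity in the last mile.
```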

Some of the other big fiber ISPs like AT&T and Vodafone have been evaluating XGS-PON. This technology can deliver 10 Gbps downstream and 2.5 Gbps upstream – a big step up in bandwidth over GPON. The major advantage of the technology is that it uses fixed lasers, which are far less complex and costly. And unlike Verizon, these two companies are building a lot of new FTTH networks.

And while all of this technology is being discussed, ISPs today are already delivering 10 Gbps data pipes to customers using active Ethernet (AON) technology. For example, US Internet in Minneapolis has been offering 10 Gbps residential service for several years. The active Ethernet technology uses lower cost electronics than most PON technologies, but still can have higher costs than GPON due to the fact that there is a dedicated pair of lasers – one at the core and one at the customer site – for each customer. A PON network instead uses one core laser to serve multiple customers.
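One simplified way to see that cost structure is to count lasers (a sketch that ignores everything else in the bill of materials – splitters, ports, cabinets and so on):

```python
import math

# Active Ethernet dedicates a laser pair to each subscriber; a PON
# shares one OLT-side laser across an entire splitter group.
def lasers_aon(customers: int) -> int:
    # one laser at the core plus one at each customer premises
    return 2 * customers

def lasers_pon(customers: int, split_ratio: int = 32) -> int:
    # one ONT laser per customer plus one shared OLT laser per PON
    return customers + math.ceil(customers / split_ratio)

n = 1000
print(f"AON lasers for {n} customers:  {lasers_aon(n)}")   # 2000
print(f"GPON lasers for {n} customers: {lasers_pon(n)}")   # 1032
```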

It may be a number of years until this is resolved because most ISPs building FTTH networks are still happily buying and installing GPON. One ISP client told me that they are not worried about GPON becoming obsolete because they could double the capacity of their network at any time by simply cutting the number of customers on a neighborhood PON in half. That would mean installing more cards in the core without having to upgrade customer electronics.

From what everybody tells me GPON networks are not experiencing any serious problems. But it’s obvious, with household demand for broadband doubling every three years, that the day will come when these networks experience blockages. Creative solutions like splitting the PON could keep GPON working well for a decade or two. And that might make GPON the preferred technology for a long time to come, regardless of the vendors’ strong desire to get everybody to pay for upgrading existing networks.

A New PON Technology

Now that many fiber competitors are providing gigabit broadband to a lot of customers, we are starting to stress the capability of the existing passive optical network (PON) technology. The most predominant type of PON network in place today is GPON (gigabit PON). This technology shares 2.5 gigabits of download data among up to 64 homes (although most providers put fewer customers on a PON).

My clients today tell me that their gigabit customers still don’t use much more data than other customers. I liken this to the time when the industry provided unlimited long distance to households and found out that, on the whole, those customers didn’t call a lot more than before. As long as you can’t tell a big difference in usage between a gigabit customer and a 100 Mbps customer, introducing gigabit speeds alone is not going to break a network.

But what does matter is that all customers, in aggregate, are demanding more downloads over time. Numerous studies have shown that the amount of total data demanded by an average household doubles about every three years. With that kind of exponential growth it won’t take long until almost any network will show stress. But added to the inexorable growth of data usage is a belief that, over time, customers with gigabit speeds are going to find applications that use that speed. When gigabit customers really start using gigabit capabilities the current PON technology will be quickly overstressed.

Several vendors have come out with a new PON technology referred to as XGPON or NGPON1. This technology increases the shared data stream to 10 gigabits. The primary trouble is that it is neither easily forward nor backward compatible. Upgrading to it means an outlay for new electronics in exchange for only a 4-times increase in bandwidth. I have a hard time recommending that a customer with GPON make a spendy upgrade for a technology that is only modestly better. It won’t take more than a decade until the exponential growth of customer demand catches up to this upgrade.

But there is another new alternative. Both Alcatel-Lucent and Huawei have come out with a next-generation PON technology that uses TWDM (time and wavelength division multiplexing) to put multiple PONs onto the same fiber. The first generation of this technology creates four different light pathways using four different ‘colors’ of light. This is effectively the same as a 4-way node split, in that it creates a separate PON for the customers assigned to a given color. Even if you had 64 customers on a PON, this technology can instead provide four separate PONs of 16 customers each. And with 32 customers it becomes an extremely friendly 8 customers per PON.
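Here’s a small sketch of that 4-way split (the wavelength names are illustrative placeholders, not the actual wavelength plan from the spec):

```python
# Spreading the customers on one fiber across four TWDM 'colors' is
# effectively a 4-way node split done in the electronics rather than
# in the physical splitters.
COLORS = ["color1", "color2", "color3", "color4"]   # hypothetical labels

def assign_colors(customer_ids):
    """Round-robin the customers on a fiber across the four colors."""
    pons = {color: [] for color in COLORS}
    for i, cust in enumerate(customer_ids):
        pons[COLORS[i % len(COLORS)]].append(cust)
    return pons

for color, members in assign_colors(range(32)).items():
    print(f"{color}: {len(members)} customers")   # 8 customers per color
```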

This new technology is being referred to as NGPON2. Probably the biggest benefit of the technology is that it doesn’t force a migration and upgrade on existing customers. Those customers can stay on the existing color while you migrate or add new customers to the new colors. Any existing customer moved onto a new PON color would, however, need an upgraded ONT. The best feature of the new technology is that it provides a huge upgrade in bandwidth – either 40 Gbps or 80 Gbps of download capacity per existing PON fiber.

This seems like a no-brainer for any service provider who wants to offer gigabit as their only product. An all-gigabit customer base is going to create choke points in a traditional PON network, but as long as the backbone bandwidth to the nodes is increased along with this upgrade, it ought to handle gigabit customers seamlessly (when they actually start using their gigabit).

The big question is when does a current provider need to consider this kind of upgrade? I have numerous clients who provide 100 Mbps service on PON who are experiencing very little network contention. One strategy some of them are considering with GPON is to place gigabit customers on their own PON and limit the number of customers on each gigabit PON to a manageable number. With creative strategies like this it might be possible to keep GPON running comfortably for a long time. It’s interesting to see PON providers starting to seriously consider bandwidth management strategies. It’s something that the owners of HFC cable networks have had to do for a decade, and it seems that we are getting to the point where even fiber networks can feel stress from bandwidth growth.
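As a closing sketch of that segregation strategy (the tier names and per-PON caps here are illustrative, not recommendations):

```python
# Keep ordinary customers on full-size PONs but cap how many gigabit
# customers share a PON, so heavy users can't swamp a splitter group.
PON_CAPS = {"100M": 32, "gigabit": 8}   # hypothetical caps per tier

def pons_needed(customer_counts: dict) -> dict:
    """PONs required per service tier, given the per-PON caps."""
    return {tier: -(-count // PON_CAPS[tier])    # ceiling division
            for tier, count in customer_counts.items()}

print(pons_needed({"100M": 300, "gigabit": 20}))
# {'100M': 10, 'gigabit': 3} -- the gigabit customers land on their
# own, lightly loaded PONs while everyone else stays put.
```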