Next-generation PON is Here

At some point during the last year, practically every ISP I know that uses PON technology has quietly upgraded to next-generation PON. For now, that mostly means XGS-PON, which can deliver 10 gigabits of bandwidth to a neighborhood. We’re on the verge of seeing even faster PON cards that can deliver 40 gigabits, and probably 100 gigabits beyond that.

This is a big upgrade over GPON, which delivers 2.5 Gbps of download bandwidth to a neighborhood node. In recent years, ISPs have been able to use GPON technology to sell reliable gigabit speeds to the homes and businesses that share a neighborhood network.

We saw a similar upgrade a dozen years ago when the industry upgraded from BPON, which delivered 622 Mbps to a neighborhood – the upgrade to GPON was a 4-fold increase in available bandwidth. Upgrading to XGS-PON is another 4-fold increase. 40-gigabit PON will be another 4-fold increase.
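Those generation-over-generation multiples are easy to verify from the nominal line rates. Here’s a minimal sketch in Python (the 40-gigabit figure is an assumption of four times the XGS-PON rate, since that standard isn’t final):

```python
# Nominal downstream line rates for each PON generation, in Mbps.
generations = [
    ("BPON", 622),
    ("GPON", 2488),
    ("XGS-PON", 9953),
    ("40G PON", 39813),  # assumed: 4x the XGS-PON line rate
]

# Print the generation-over-generation bandwidth multiple.
for (prev_name, prev_rate), (name, rate) in zip(generations, generations[1:]):
    print(f"{prev_name} -> {name}: {rate / prev_rate:.1f}x")
```

Each step prints as almost exactly 4.0x, which is why the industry talks about these as 4-fold upgrades.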

The best thing about the current upgrade to faster PON is that the vendors got smarter this time. I still have clients who were angry that the upgrade from BPON to GPON meant a total replacement of all electronics – even though the vendors had declared that there would be an easy upgrade path from BPON. Many ISPs decided to change vendors for the upgrade to GPON, and I think vendors got the message.

The PON architecture for most vendors allows upgrading some customers to XGS-PON by adding a faster card to an existing GPON platform. This smart kind of upgrade means that ISPs don’t need to make a flash-cut to faster PON but can move customers one at a time or neighborhood by neighborhood. Upgrades to even faster generations of PON are supposed to work in the same way.

The impact of going to GPON was the widespread introduction of gigabit-speed broadband. A decade ago, cable companies declared gigabit broadband to be a gimmick – likely because they couldn’t match gigabit speeds at the time. But now, all large cable companies are successfully selling gigabit products. According to the latest report from OpenVault, a quarter of homes now subscribe to gigabit or faster broadband products, and almost 20% of homes regularly use more than a terabyte of data in a month.

We’ve already seen changes in the market due to next-generation PON. I know a number of ISPs that are now selling 2 Gbps and 5 Gbps broadband products using the new technology. A few are now offering 10 Gbps connections.

One of the biggest decisions faced by an ISP is how many customers to load onto a single PON card at the chassis. GPON allowed for putting up to 128 customers on a PON card, but most ISPs I know only loaded 32 customers. While this was a conservative decision, it built in enough headroom that customers almost always get the bandwidth they subscribe to.

It’s possible to load a lot more customers onto an XGS-PON card. Most of my clients are still configuring 32 customers per card, although I’m now seeing a few ISPs load 48 or 64 customers per card. There is enough bandwidth on a 10-gigabit card to give everybody a gigabit product, even with higher customer counts, except perhaps in business districts where some customers might use a lot of bandwidth all of the time. The main consideration for loading extra customers on a card is the consequence of a bad card knocking out a greater number of customers.
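As a rough sketch of the worst-case math behind those loading choices (the numbers are illustrative, and it deliberately ignores oversubscription, which is what makes gigabit products workable):

```python
CARD_CAPACITY_MBPS = 10_000  # an XGS-PON card, roughly 10 gigabits downstream

for customers in (32, 48, 64):
    # Bandwidth per customer if everyone ran flat-out at the same time --
    # a scenario that essentially never happens in a residential PON.
    worst_case = CARD_CAPACITY_MBPS / customers
    print(f"{customers} customers: {worst_case:,.0f} Mbps each in the worst case")
```

Even at 64 customers per card, every customer could simultaneously pull over 150 Mbps, which is why gigabit products remain safe to sell at higher counts.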

While you never hear them talking about it, the widespread introduction of XGS-PON is one of the driving factors behind cable companies scrambling to upgrade to faster bandwidth. While the cable companies initially scoffed at gigabit speeds on GPON, I think they’ve learned that claims of faster speeds by fiber ISPs have convinced the public that fiber is superior, even when a cable company can match fiber speeds.

The race for faster technologies is clearly on. Many industry skeptics still scoff that people don’t need faster speeds – but ISPs have learned that people will buy it. That’s a fact that is hard to argue with.

Is it Time to Say Farewell to GPON?

GPON is a great technology. GPON stands for gigabit passive optical network, and it is the predominant technology delivering last-mile fiber broadband today. The GPON standard was first ratified in 2003, but like most new technologies, it took a few years to hit the market.

GPON quickly became popular because it allowed the provisioning of a gigabit service to customers. A GPON link delivers 2.4 gigabits downstream and 1.2 gigabits upstream to serve up to 64 customers, although most networks I’ve seen don’t deliver to more than 32 customers.

There is still some disagreement among ISPs about the best last-mile fiber technology, and some ISPs still favor active Ethernet networks. The biggest long-term advantage of GPON is that the technology serves multiple customers per fiber, unlike active Ethernet, and most of the R&D for last-mile fiber over the past decade has gone to PON technology.

There are a few interesting benefits of GPON versus active Ethernet. One of the most important is the ability to serve multiple customers on a single feeder fiber. PON has one laser at a hub talking to 32 or more customers, which means a lot less fiber is needed in the network. The other advantage of PON that ISPs like is that there are no active electronics in the network – electronics sit only at hubs and at the customer. That’s a lot fewer components to go bad and fewer repairs to make in the field.

We’re now seeing most new fiber designs using XGS-PON. This technology increases bandwidth and delivers a symmetrical 10-gigabit path to a neighborhood (for purists, it’s actually 9.953 gigabits). The technology can serve up to 256 customers on a fiber, although most ISPs will serve fewer than that.

The biggest advantage of XGS-PON is that the electronics vendors have all gotten smarter, and XGS-PON is being designed as an overlay onto GPON networks. An ISP can slip an XGS-PON card into an existing GPON chassis and instantly provision customers with faster broadband. The faster speeds just require an upgraded ONT – the electronics at the customer location.

The vendors did this because they took a lot of grief from the industry when the industry converted from the earlier BPON or APON to GPON. The GPON electronics were incompatible with the older PON technologies, and it required a forklift upgrade – a replacement of all electronics from the core to the customer. I helped a few clients through the BPON-to-GPON upgrade, and it was a nightmare, with staff working late nights since neighborhood networks had to be taken out of service one at a time to make the upgrade.

The other interesting aspect of XGS-PON is that the technology is also forward-looking. The vendors are already field-testing 25-gigabit cards and are working on 40-gigabit cards in the lab. A fiber network provisioned with XGS-PON has an unbelievable capacity, and with new cards added it will make networks ready for the big bandwidth needs of the future. Widespread online virtual reality and telepresence can’t happen until ISPs can provision multi-gigabit connections to multiple homes in a neighborhood – something that would stress even a 10-gigabit XGS-PON connection.

XGS-PON is going to quickly open up a new level of speed competition. I have one new ISP client using XGS-PON that has three broadband products with download speeds of 1, 2, and 5 gigabits, all with an upload speed of 1 gigabit. The cable companies publicly say they are not worried about fiber competition, but they are a long way away from competing with those kinds of speeds.

I’m sure GPON will be around for years to come. But as happens with all technology upgrades, there will probably come a day when the vendors stop supporting old GPON cards and ONTs. The good news for ISPs is that many of my clients have GPON connections that have worked for over a decade without a hiccup, and there is no rush to replace something that is working great.

Deploying 10-Gigabit PON

From a cost perspective, we’re not seeing any practical difference between the price of XGS-PON that offers a 10-gigabit data path and traditional GPON. I have a number of clients now installing XGS-PON, and we now recommend it for new fiber projects. I’ve been curious about how ISPs are going to deploy the technology in residential and small-business neighborhoods.

GPON has been the technology of choice for well over a decade. The GPON technology delivers a download path of 2.4 gigabits of bandwidth to each neighborhood PON. Most of my clients have deployed GPON in groups of up to 32 customers in a neighborhood PON, and in practical deployment most of them pack a few fewer than 32 onto the typical GPON card.

I’m curious about how ISPs will deploy XGS-PON. From a pure math perspective, an XGS-PON network delivers four times as much bandwidth to each neighborhood as GPON. An ISP could maintain the same level of service as GPON by packing 128 customers onto each XGS-PON card. But network engineering is never that nicely linear, and there are a number of factors to consider when designing a new network.

All ISPs rely on oversubscription when deciding the amount of bandwidth needed for a given portion of a network. Oversubscription is shorthand for taking advantage of the phenomenon that customers in a given neighborhood rarely use all of the bandwidth they’ve been assigned, and never all use it at the same time. Oversubscription allows an ISP to feel safe in selling gigabit broadband to 32 customers in a GPON network, knowing that collectively they will not ask to use more than 2.4 gigabits at the same time. For a more detailed description of oversubscription, see this earlier blog. There are ISPs today that put 64 customers or more on a GPON card – the current capacity is up to 128 customers. ISPs understand that putting too many customers on a PON card will start to emulate the poor behavior we see in cable company networks that sometimes bog down at busy times.
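Here’s a minimal sketch of the oversubscription arithmetic (the customer counts and products are illustrative, not from any specific network):

```python
def oversubscription_ratio(customers: int, sold_mbps: float, capacity_mbps: float) -> float:
    """Total bandwidth sold divided by the bandwidth actually available."""
    return (customers * sold_mbps) / capacity_mbps

# 32 gigabit customers sharing a 2,400 Mbps GPON port.
print(f"{oversubscription_ratio(32, 1000, 2400):.1f} : 1")    # 13.3 : 1
# 128 gigabit customers sharing a 10,000 Mbps XGS-PON port.
print(f"{oversubscription_ratio(128, 1000, 10000):.1f} : 1")  # 12.8 : 1
```

Note that 128 gigabit customers on XGS-PON lands at almost exactly the same ratio as 32 on GPON – the “same level of service” math from the prior paragraph.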

Most GPON networks today are not overstressed. Most of my clients tell me that they can comfortably fit 32 customers onto a GPON card and only rarely see a neighborhood maxed out in bandwidth. But ISPs do sometimes see a PON that gets overstretched if there are more than a few heavy users in the same PON. The easiest solution to that issue today is to reduce the number of customers in a busy PON – such as splitting into two 16-customer PONs. This isn’t an expensive issue because over-busy PONs are still a rarity.

ISPs understand that, year after year, customers are using more bandwidth and engaging in more data-intensive tasks. Certainly, a PON with half a dozen people now working from home is a lot busier than it was before the pandemic. It might be years before a lot of neighborhood PONs get overstressed, but eventually the growth in bandwidth demand will catch up to the GPON capacity. As a reminder, the PON engineering decision is based on the amount of demand at the busiest times of the day. That busy-hour level of traffic is not growing as quickly as the overall bandwidth used by homes – which more than doubled in just the last three years.

There are other considerations in designing XGS-PON. Today, the worst that can happen is for 32 customers to lose bandwidth when a PON card fails. It feels riskier from a business perspective to have 128 customers sharing a PON card – that’s a much more significant network outage.

There is no magic metric for an ISP to use. You can’t fully trust the vendors, because they will sell more PON cards if an ISP is extremely conservative and puts only 32 customers on a 10-gigabit PON. Meanwhile, ISP owners might not feel comfortable leaping to 128 or more customers on a PON. There are worse decisions to have to make, because almost any configuration of PON oversubscription will work on a 10-gigabit network. The right solution will balance making sure that customers get the bandwidth they subscribe to without being so conservative that the PON cards are massively underutilized. Over time, ISPs will develop internal metrics that work with their service philosophy and the demands of their customer base.

25-Gigabit PON

The industry has barely broken ground on 10-gigabit PON technology in terms of market deployments, and the vendors have already moved on to 25-gigabit PON technology. I know a few ISPs that are exclusively deploying 10-gigabit XGS-PON, but most ISPs are still deploying the fifteen-year-old GPON technology.

As a short primer, PON (passive optical network) technology is a last-mile technology that uses one laser in a core location to communicate with multiple customers. In the U.S., most ISPs don’t deploy GPON to more than 32 customers. The technology is called passive because there are no electronics in the network between the core laser and the customer lasers. GPON technology delivers 2.4 Gbps of bandwidth to a PON (a group of customers connected to the same core laser). The upgrade to XGS-PON brings something close to 10 Gbps to a PON, while 25GS-PON will bring 25 Gbps.

The technology is being championed by the 25GS-PON MSA (multi-source agreement) Group, which has come together to create a standard specification for the 25-gigabit technology. It’s worth a glance at their website because it’s a virtual who’s-who of large ISPs, chip manufacturers, and electronics vendors.

I’m not yet hearing many complaints from ISPs seeing GPON technology being overwhelmed in residential neighborhoods. I’ve asked recently, and most of the small ISPs I queried told me that individual neighborhood PONs average about 40% utilization, meaning that 40% of the bandwidth to customers is being used at the same time. ISPs start to get worried when utilization routinely crosses 80%, and ideally, ISPs never want to hit 100% utilization, which is when customers start getting blocked.
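A back-of-the-envelope sketch of the runway those numbers imply, assuming busy-hour utilization grows at the oft-cited rate of doubling every three years (a deliberately pessimistic assumption, since busy-hour traffic grows more slowly than total household usage):

```python
import math

current = 0.40         # reported average busy-hour utilization
worry = 0.80           # the threshold where ISPs start to get concerned
doubling_years = 3.0   # rule-of-thumb demand growth rate

# Years for utilization to climb from 40% to 80% at that growth rate.
years = doubling_years * math.log2(worry / current)
print(f"{years:.0f} years until the worry threshold")  # 3 years
```

Since one doubling takes utilization from 40% straight to 80%, even the pessimistic assumption gives a typical PON several years of headroom.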

The cellular carriers were the first champions of 10-gigabit PON technology. This is the most affordable way to bring multi-gigabit speeds to small cell sites. The network owner can deploy a 10-gigabit core and communicate with multiple small cell sites without needing the extra field electronics used in a Metro Ethernet network. The 25-gigabit technology is aimed at cell sites and other large bandwidth users.

The technology is smartly being designed as an overlay onto existing GPON and XGS-PON deployments. In an overlay network, a GPON owner can continue to operate GPON for residential neighborhoods and can operate XGS-PON for a PON of businesses with larger bandwidth requirements. The 25GS-PON would be used for the real heavy hitters, or perhaps to create a private network between locations in a market.

I’ve been thinking about the benefits of 25GS-PON over the other current GPON technologies.

  • This is a cheaper technology than the alternatives. The MSA group has designed it as a natural progression beyond GPON and XGS-PON, which means most of the components benefit from the huge manufacturing economy of scale for PON technology. If 25GS-PON costs are low enough, this could spell the eventual end of Metro Ethernet as a technology.
  • It’s a great way to bring big bandwidth to multiple customers in the same part of a network. This technology can supply bandwidth to small cell sites that wasn’t imaginable just a few years ago.
  • The technology is easy to add to an existing network by sliding a new card into a compatible PON chassis. That means no new racks in data centers or new shelves in huts.

Electronics manufacturers have been frustrated by how long the GPON technology has remained viable – and in many applications it might be good for years to come. Telecom manufacturers thrived in the past when a full replacement and upgrade of electronics was needed every seven years. Designing 25-gigabit PON as an overlay is an acknowledgment that upgrades in the future are going to be incremental, and upgrades that don’t overlay onto existing technologies will likely be shunned. ISPs are not interested in rip-and-replace technologies.

The 25GS-PON technology might become commercially available as early as the end of 2022. There have already been field trials of the technology. After that, the vendors will move on to the next PON upgrade. There’s already talk of whether the next generation should be 40-gigabit or 100-gigabit.

A Strategy for Upgrading GPON

I’ve been asked a lot during 2018 whether fiber overbuilders ought to be considering the next generation of PON technology that might replace GPON. They hear about the newer technologies from vendors and the press. For example, Verizon announced a few months ago that they would begin introducing Calix NGPON2 into their fiber network next year. The company recently did a test of the technology in Tampa and achieved 8 Gbps speeds. AT&T has been evaluating the other alternative technology, XGS-PON, and may be introducing it into their network in 2019.

Before anybody invests a lot of money in a GPON network, it’s always a good idea to ask if there are better alternatives – as should be done for every technology deployed in the network.

One thing to consider is how Verizon plans to use NGPON2. They view it as the least expensive way to deliver bandwidth to a 5G network that consists of multiple small cells mounted on poles. They like PON technology because it accommodates multiple end-points using a single last-mile fiber, meaning a less fiber-rich network than other 10-gigabit technologies require. Verizon also recently began the huge task of consolidating their numerous networks, and PON gives them a way to consolidate multi-gigabit connections of all sorts onto a single platform.

Very few of my clients operate networks that have a huge number of 10-gigabit local end points. Anybody that does should consider Verizon’s decision because NGPON2 is an interesting and elegant solution for handling multiple large customer nodes while also reducing the quantity of lit fibers in the network.

Most clients I work with operate PON networks to serve a mix of residential and business customers. The first question I always ask them is if a new technology will solve an existing problem in their network. Is there anything that a new technology can do that GPON can’t do? Are my clients seeing congestion in neighborhood nodes that are overwhelming their GPON network?

Occasionally I’ve been told that they want to provide faster connections to a handful of customers for which the PON network is not sufficient – they might want to offer dedicated gigabit or larger connections to large businesses, cell sites, or schools. We’ve always recommended that clients design networks with the capability of providing large Ethernet connections external to the PON network. There are numerous affordable technologies for delivering a 10-gigabit pipe directly to a customer with active Ethernet. It seems like overkill to upgrade the electronics for all customers to satisfy the needs of a few large customers, rather than overlaying a second technology onto the network. We’ve always recommended that networks include some extra fiber pairs in every neighborhood exactly for this purpose.

I’ve not yet heard an ISP tell me that they are overloading a residential PON network due to customer data volumes. This is not surprising. GPON was introduced just over a decade ago, and at that time the big ISPs offered speeds in the range of 25 Mbps. GPON delivers 2.4 gigabits to up to 32 homes and can easily support residential gigabit service. At the time of introduction, GPON was at least a forty-times increase in customer capacity compared to DSL and cable modems – a gigantic leap forward in capability. It takes a long time for consumer household usage to grow into that much new capacity. The next biggest leap forward we’ve seen was the jump from dial-up to 1 Mbps DSL – a 17-times increase in capacity.
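Those multiples check out against the speeds cited (a quick sketch; the 56 kbps dial-up baseline is my assumption):

```python
# Gigabit service on GPON versus the 25 Mbps the big ISPs sold at the time.
print(f"GPON-era leap: {1000 / 25:.0f}x")      # 40x

# Early 1 Mbps DSL versus 56 kbps dial-up.
print(f"Dial-up to DSL: {1.0 / 0.056:.1f}x")   # 17.9x -- the '17-times' above
```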

Even if somebody starts reaching capacity on a GPON, there are some inexpensive upgrades that are far less expensive than upgrading to a new technology. A GPON network won’t reach capacity evenly and would see it in some neighborhood nodes first. The capacity in a neighborhood GPON node can easily be doubled by cutting the size of the node in half, splitting it into two PONs. I have one client that did the math and said that as long as they can buy GPON equipment they would upgrade by splitting a few times – from 32 to 16 homes, from 16 homes to 8 homes, and maybe even from 8 to 4 customers before they’d consider tearing out GPON for something new. Each such split doubles capacity, and splitting nodes three times would be an 8-fold increase in capacity. If we continue on the path of seeing household bandwidth demand double every three years, then splitting nodes twice would easily add more than another decade to the life of a PON network. In doing that math, it’s important to understand that splitting a node actually more than doubles capacity, because it also decreases the oversubscription factor for each customer on the node.
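Here’s a sketch of that client’s arithmetic, assuming each split exactly doubles capacity and demand doubles every three years (conservative, since the last point above means a split more than doubles effective capacity):

```python
import math

DOUBLING_YEARS = 3  # household demand roughly doubles every three years

for splits in (1, 2, 3):
    homes = 32 // 2 ** splits        # homes left on each PON after splitting
    capacity_multiple = 2 ** splits  # at least this much more capacity per home
    headroom = DOUBLING_YEARS * math.log2(capacity_multiple)
    print(f"{splits} split(s): {homes} homes/PON, {capacity_multiple}x capacity, "
          f"~{headroom:.0f} more years")
```

The raw math gives roughly six years for two splits and nine for three; the reduced oversubscription per customer is what stretches that past a decade.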

At CCG we’ve always prided ourselves on being technology neutral and vendor neutral. We think network providers should use the technology that most affordably fits the needs of their end users. We rarely see a residential fiber network where GPON is not the clear winner from a cost and performance perspective. We have clients using numerous active Ethernet technologies that are aimed at serving large businesses or for long-haul transport. But we are always open-minded and would easily recommend NGPON2 or XGS-PON if it is the best solution. We just have not yet seen a network where the new technology is the clear winner.

Predicting Broadband Usage on Networks

One of the hardest jobs these days is being a network engineer who is trying to design networks to accommodate future broadband usage. We’ve known for years that the amount of data used by households has been doubling every three years – but predicting broadband usage is never that simple.

Consider the recent news from OpenSource, a company that monitors usage on wireless networks. They report a significant shift in WiFi usage by cellular customers. Over the last year, AT&T and Verizon have introduced ‘unlimited’ cellular plans, and T-Mobile has pushed its own unlimited plans harder in response. While the AT&T and Verizon plans are not really unlimited and have caps a little larger than 20 GB per month, the introduction of the plans has changed the mindset of numerous users, who no longer automatically seek out WiFi networks.

In the last year, the percentage of WiFi usage on the Verizon network fell from 54% to 51%; on AT&T from 52% to 49%; and on T-Mobile from 42% to 41%. Those might not sound like major shifts, but for the Verizon network it means the cellular network saw an unexpected additional 6% growth in data volumes in one year over what the company might normally have expected. For a network engineer trying to make sure that all parts of the network are robust enough to handle the traffic, this is a huge change, and it means that chokepoints in the network will appear a lot sooner than expected. In this case, the change to unlimited plans is something that was cooked up by marketing folks, and it’s unlikely that the network engineers knew about it any sooner than anybody else.
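The 6% figure falls out of the share shift. Here’s the arithmetic as a sketch (it assumes total traffic is held constant, to isolate the offload effect):

```python
# Verizon: the WiFi share of customer traffic fell from 54% to 51%,
# so the cellular share rose from 46% to 49%.
wifi_before, wifi_after = 0.54, 0.51
cell_before, cell_after = 1 - wifi_before, 1 - wifi_after

# Extra cellular traffic caused purely by the share shift.
extra = cell_after / cell_before - 1
print(f"{extra:.1%} extra cellular traffic from the shift alone")  # 6.5%
```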

I’ve seen the same thing happen with fiber networks. I have a client who built one of the first fiber-to-the-home networks using BPON, the first generation of PON electronics. The network was delivering broadband speeds of between 25 Mbps and 60 Mbps, with most customers in the range of 40 Mbps.

Last year the company started upgrading nodes to the newer GPON technology, which upped the potential customer speeds on the network to 1 gigabit. The company introduced both a 100 Mbps product and a gigabit product, but very few customers immediately upgraded. The upgrade meant changing the electronics at the customer location, but also involved a big boost in the size of the data pipes between neighborhood nodes and the hub.

The company was shocked to see data usage in the nodes immediately spike upward between 25% and 40%. After all, they had not arbitrarily increased customer speeds across the board, but had just changed the technology in the background. For the most part, customers had no idea they had been upgraded – so the spike can’t be attributed to a change in customer behavior like what happened to the cellular companies after introducing unlimited data plans.

However, I suspect that MUCH of the increased usage still came from changed customer behavior. While customers were not notified that the network had been upgraded, I’m sure that many customers noticed the change. The biggest trend we see in household broadband demand over the last two years is the desire by households to utilize multiple big data streams at the same time. Before the upgrades, households were likely restricting their usage by not allowing kids to game or do other large-bandwidth activities while the household was video streaming or doing work. After the upgrade, they probably found they no longer had to self-monitor and restrict usage.

In addition to this likely change in customer behavior, the spikes in traffic were also likely due to correcting bottlenecks in the older fiber network that the company had never recognized or understood. I know there is a general impression in the industry that fiber networks don’t see the same kinds of bottlenecks that we expect in cable networks. In the case of this network, a speed test on any given customer generally showed a connection to the hub at the speed the customer was purchasing – and so the network engineers assumed that everything was okay. There were a few complaints from customers that their speeds bogged down in the evenings, but such calls were sporadic and not widespread.

The company decided to make the upgrade because the old electronics were no longer supported by the vendor, and they also wanted to offer faster speeds to increase revenues. They were shocked to find that the old network had been choking customer usage. This change really shook the engineers at the company, and they feared that the broadband growth curve was now going to run at the faster rate. Luckily, within a few months each node settled back down to the historic growth rate. However, the company found itself instantly with network usage they hadn’t expected for at least another year, putting them that much closer to the next upgrade.

It’s hard for a local network owner to predict the changes that are going to affect network utilization. For example, they can’t predict that Netflix will start pushing 4K video. They can’t know that the local schools will start assigning homework that involves watching a lot of videos at home. Even though we all understand the overall growth curve for broadband usage, it doesn’t grow in a straight line, and there are periods of faster and slower growth along the curve. It’s enough to cause network engineers to go gray a little sooner than expected!

What’s the Next FTTP Technology?

There is a lot of debate within the industry about the direction of the next generation of last mile fiber technology. There are three possible technologies that might be adopted as the preferred next generation of electronics – NG-PON2, XGS-PON or active Ethernet. All of these technologies are capable of delivering 10 Gbps streams to customers.

Everybody agrees that the current widely deployed GPON is starting to get a little frayed around the edges. That technology delivers 2.4 Gbps downstream and 1.2 Gbps upstream for up to 32 customers, although most networks I work with are configured to serve 16 customers at most. All the engineers I talk to think this is still adequate technology for residential customers, and I’ve never heard of a neighborhood PON being maxed out for bandwidth. But many ISPs already use something different for larger business customers that demand more bandwidth than a PON can deliver.

The GPON technology is over a decade old, which generally is a signal to the industry to look for the next-generation replacement. This pressure usually starts with vendors who want to make money pushing the latest and greatest new technology – and this time it’s no different. But after taking all of the vendor hype out of the equation, it’s always been the case that any new technology is only going to be accepted once it achieves an industry-wide economy of scale. And that almost always means being accepted by at least one large ISP. There are a few exceptions to this, like what happened with the first generation of telephone smart switches that found success with small telcos and CLECs first – but most technologies go nowhere until a vendor is able to mass-manufacture units to get the costs down.

The most talked-about technology is NG-PON2 (next-generation passive optical network). This technology works by having tunable lasers that can function at several different light frequencies, which allows more than one PON to be transmitted simultaneously over the same fiber, each at a different wavelength. But that makes this a complex technology, and the key issue is whether it can ever be manufactured at price points that can match the alternatives.

The only major proponent of NG-PON2 today is Verizon, which recently did a field trial to test the interoperability of several different vendors, including Adtran, Calix, Broadcom, Cortina Access, and Ericsson. Verizon seems to be touting the technology, but there is some doubt whether they alone can drag the rest of the industry along. Verizon seems enamored with the idea of using the technology to provide bandwidth for the small cell sites needed for a 5G network. But the company is not building much new residential fiber. They announced they would be building a broadband network in Boston, which would be their first new construction in years, but there is speculation that a lot of that deployment will use wireless 60 GHz radios instead of fiber for the last mile.

The big question is whether Verizon can create an economy of scale to get prices down for NG-PON2. The whole industry agrees that NG-PON2 is the best technical solution because it can deliver 40 Gbps to a PON while also allowing great flexibility in assigning different customers to different wavelengths. But the best technological solution is not always the winning solution, and the concern for most of the industry is cost. Today the early NG-PON2 electronics are being priced at 3 to 4 times the cost of GPON, due in part to the complexity of the technology, but also due to the lack of economy of scale without any major purchaser of the technology.

Some of the other big fiber ISPs like AT&T and Vodafone have been evaluating XGS-PON. This technology can deliver a symmetrical 10 Gbps – a big step up in bandwidth over GPON. The major advantage of the technology is that it uses a fixed laser, which is far less complex and costly. And these two companies are building a lot more new FTTH network than Verizon.

And while all of this technology is being discussed, ISPs today are already delivering 10 Gbps data pipes to customers using active Ethernet (AON) technology. For example, US Internet in Minneapolis has been offering 10 Gbps residential service for several years. Active Ethernet uses lower-cost electronics than most PON technologies, but can still have a higher total cost than GPON because there is a dedicated pair of lasers – one at the core and one at the customer site – for each customer. A PON network instead uses one core laser to serve multiple customers.

It may be a number of years until this is resolved because most ISPs building FTTH networks are still happily buying and installing GPON. One ISP client told me that they are not worried about GPON becoming obsolete because they could double the capacity of their network at any time by simply cutting the number of customers on a neighborhood PON in half. That would mean installing more cards in the core without having to upgrade customer electronics.

From what everybody tells me, GPON networks are not experiencing any serious problems. But it’s obvious, as the household demand for broadband keeps doubling every three years, that the day will come when these networks will experience blockages. Creative solutions like splitting the PON could keep GPON working great for a decade or two. And that might make GPON the preferred technology for a long time, regardless of the vendors’ strong desire to get everybody to pay to upgrade existing networks.

A New PON Technology

Now that many fiber competitors are providing gigabit Ethernet to a lot of customers, we have started to stress the capability of the existing passive optical network (PON) technology. The most predominant type of PON network in place today is GPON (gigabit PON). This technology shares 2.5 gigabits of download data among up to 64 homes (although most providers put fewer customers on a PON).

My clients today tell me that their gigabit customers still don’t use much more data than other customers. I liken this to the time when the industry provided unlimited long distance to households and found out that, on the whole, those customers didn’t call a lot more than before. As long as you can’t tell a big difference in usage between a gigabit customer and a 100 Mbps customer, introducing gigabit speeds alone is not going to break a network.

But what does matter is that all customers, in aggregate, are demanding more downloads over time. Numerous studies have shown that the amount of total data demanded by an average household doubles about every three years. With that kind of exponential growth it won’t take long until almost any network will show stress. But added to the inexorable growth of data usage is a belief that, over time, customers with gigabit speeds are going to find applications that use that speed. When gigabit customers really start using gigabit capabilities the current PON technology will be quickly overstressed.

Several vendors have come out with a new PON technology that has been referred to as XGPON or NGPON1. This new technology increases the shared data stream to 10 gigabits. The primary trouble with this technology is that it is neither easily forward nor backward compatible. Upgrading to 10 gigabits means an outlay for new electronics for only a 4-times increase in bandwidth. I have a hard time recommending that a customer with GPON make a spendy upgrade for a technology that is only slightly better. It won’t take more than a decade until the exponential growth of customer demand catches up to this upgrade.

But there is another new alternative. Both Alcatel-Lucent and Huawei have come out with next-generation PON technology that uses TWDM (time and wavelength division multiplexing) to put multiple PONs onto the same fiber. The first generation of this technology creates four different light pathways using four different ‘colors’ of light. This is effectively the same as a 4-way node split in that it creates a separate PON for the customers assigned to a given color. Even if you had 64 customers on a PON, this technology can instead provide four separate PONs of 16 customers. And with 32 customers, it becomes an extremely friendly 8 customers per PON.
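The node-split effect of the four colors is simple arithmetic (a quick sketch of the example above):

```python
WAVELENGTHS = 4  # 'colors' in first-generation TWDM PON

for customers_on_fiber in (64, 32):
    # Each customer is assigned to one color, creating separate logical PONs.
    per_color = customers_on_fiber // WAVELENGTHS
    print(f"{customers_on_fiber} customers -> {per_color} per color")
# 64 -> 16 and 32 -> 8, matching the 4-way node split described above
```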

This new technology is being referred to as NGPON2. Probably the biggest benefit of the technology is that it doesn’t require a forced migration and upgrade for existing customers. Those customers can stay on the existing color while you migrate or add new customers to the new colors. But any existing customer that is moved onto a new PON color would need an upgraded ONT. The best feature of the new technology is that it provides a huge upgrade in bandwidth and can provide either 40 Gbps or 80 Gbps of download per existing PON.

This seems like a no-brainer for any service provider who wants to offer gigabit as their only product. An all-gigabit network is going to create choke points in a traditional PON network, but as long as the backbone bandwidth to the nodes is increased along with this upgrade, it ought to handle gigabit customers seamlessly (when they actually start using their gigabit).

The big question is when does a current provider need to consider this kind of upgrade? I have numerous clients who provide 100 Mbps service on PON who are experiencing very little network contention. One strategy some of them are considering with GPON is to place gigabit customers on their own PON and limit the number of customers on each gigabit PON to a manageable number. With creative strategies like this it might be possible to keep GPON running comfortably for a long time. It’s interesting to see PON providers starting to seriously consider bandwidth management strategies. It’s something that the owners of HFC cable networks have had to do for a decade, and it seems that we are getting to the point where even fiber networks can feel stress from bandwidth growth.