Categories
The Industry

It’s the ISP, Not Just the Technology

Davis Strauss recently wrote an article for Broadband Breakfast that reminded me that good technology does not always mean a good ISP. There are great and not so great ISPs using every technology on the market. Mr. Strauss lists a few of the ways that an ISP can cut costs when building and operating a fiber network that are ultimately detrimental to customers. Following are a few examples.

Redundant Backhaul. Many BEAD-funded networks will be built in areas where the existing broadband fails regularly due to cuts in the single fiber backhaul feeding the area. I hear stories all the time of folks who lose broadband for a few days at a time, and sometimes much longer. Building last-mile fiber will not solve the backhaul issue if the new network relies on the same unreliable backhaul.

Oversubscription. It’s possible to overload a local fiber network just like any other network if an ISP loads more customers onto a network node than can be supported by the bandwidth supplying the node. There are multiple places where a fiber network can get overstressed, including the path between the core and neighborhoods and the path from the ISP to the Internet.
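As a back-of-the-envelope illustration of what oversubscription means in practice, here is a minimal sketch. None of the numbers come from the article – the node size, the speed tier, and the 20:1 "comfortable" ratio are all assumptions chosen just to show the arithmetic.

```python
# Illustrative sketch only: a simple oversubscription check for one network node.
# All figures (400 customers, 100 Mbps plans, 2.5 Gbps feed, 20:1 threshold)
# are assumptions, not numbers from the post.

def oversubscription_ratio(subscribers: int, plan_mbps: float, feed_mbps: float) -> float:
    """Ratio of total sold bandwidth to the bandwidth actually feeding the node."""
    return (subscribers * plan_mbps) / feed_mbps

# Example: 400 customers sold 100 Mbps plans on a node fed by a 2.5 Gbps link.
ratio = oversubscription_ratio(subscribers=400, plan_mbps=100, feed_mbps=2500)
print(f"Oversubscription ratio: {ratio:.0f}:1")   # 16:1
if ratio > 20:
    print("This node is likely to feel congested at peak hours")
```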

Lack of Spares. Fiber circuit cards and other key components in the network fail just like any other electronics. A good ISP will keep spare cards within easy reach to be able to quickly restore the network after a component failure. An ISP that cuts corners by not stocking spares can have multi-day outages while a replacement is located.

Poor Network Records. This may not sound like an important issue, but it’s vital for good customer service and network maintenance. Individual fibers are tiny and are not easy for a field technician to identify if there aren’t good records matching a given fiber to a specific customer. There is an upfront effort and cost required to organize records, and an ISP that skimps on record keeping will be forever disorganized and will take longer to perform routine repairs and maintenance.

Not Enough Technicians. Possibly the most important issue in maintaining a good network is having enough technicians to support it. The big telcos have historically understaffed field technicians, which has resulted in customers waiting days or weeks just to have a technician respond to a problem. ISPs can save a lot of money by running too lean a staff, to the detriment of customers.

Inadequate Monitoring. ISPs that invest in good network monitoring can head off a huge percentage of customer problems by reacting to network issues before customers even realize there is a problem. Many of those problems can be remedied remotely by a skilled technician if the ISP is monitoring the performance of every segment of the network.

These are just a few examples of the ways that ISPs can cut corners. It is these behind-the-scenes operating decisions that differentiate good ISPs from poor ones. Mr. Strauss doesn’t come right out and say it, but his article implies that some ISPs chasing the giant BEAD funding will be in the business of maximizing profits early in order to flip the business. An ISP with that mentality is not going to spend money on redundant backhaul, record-keeping, spares, or network monitoring. It will hope that a new fiber network can eke by without the extra spending. It might even be right about this for a few years, but eventually, taking shortcuts always comes back to cost more than doing things the right way.

We already know that some ISPs cut corners, because we’ve watched them do it for the last several decades. The big telcos will declare loudly that DSL networks perform badly because of the aging of the networks. There is some truth in that, but there are other ISPs still operating DSL networks that perform far better. The rural copper networks of the big telcos perform so poorly because the big telcos cut every cost possible. They eliminated technicians, didn’t maintain spare inventories, and invested nothing in additional backhaul.

I honestly don’t know how a state broadband office is going to distinguish between an ISP that will do things right and one that will cut corners – that’s not the sort of thing that can be captured in a grant application, since every ISP will say it plans to do a great job and offer superlative customer service.

Categories
Technology

A Strategy for Upgrading GPON

I’ve been asked a lot during 2018 if fiber overbuilders ought to be considering the next generation of PON technology that might replace GPON. They hear about the newer technologies from vendors and the press. For example, Verizon announced a few months ago that they would begin introducing Calix NGPON2 into their fiber network next year. The company did a test using the technology recently in Tampa and achieved 8 Gbps speeds. AT&T has been evaluating the other alternate technology, XGS-PON, and may be introducing it into their network in 2019.

Before anybody invests a lot of money in a GPON network it’s a good idea to always ask if there are better alternatives – as should be done for every technology deployed in the network.

One thing to consider is how Verizon plans on using NGPON2. They view this as the least expensive way to deliver bandwidth to a 5G network that consists of multiple small cells mounted on poles. They like PON technology because it accommodates multiple end-points using a single last-mile fiber, meaning a less fiber-rich network than with other 10-gigabit technologies. Verizon also recently began the huge task of consolidating their numerous networks and PON gives them a way to consolidate multi-gigabit connections of all sorts onto a single platform.

Very few of my clients operate networks that have a huge number of 10-gigabit local end points. Anybody that does should consider Verizon’s decision because NGPON2 is an interesting and elegant solution for handling multiple large customer nodes while also reducing the quantity of lit fibers in the network.

Most of the clients I work with operate PON networks serving a mix of residential and business customers. The first question I always ask them is whether a new technology will solve an existing problem in their network. Is there anything a new technology can do that GPON can’t? Are they seeing congestion in neighborhood nodes that is overwhelming the GPON network?

Occasionally I’ve been told that they want to provide faster connections to a handful of customers for which the PON network is not sufficient – they might want to offer dedicated gigabit or larger connections to large businesses, cell sites or schools. We’ve always recommended that clients design networks with the capability of large Ethernet connections external to the PON network. There are numerous affordable technologies for delivering a 10-gigabit pipe directly to a customer with active Ethernet. It seems like overkill to upgrade the electronics for all customers to satisfy the needs of a few large customers rather than overlaying a second technology onto the network. We’ve always recommended that networks have some extra fiber pairs in every neighborhood exactly for this purpose.

I’ve not yet heard an ISP tell me that they are overloading a residential PON network due to customer data volumes. This is not surprising. GPON was introduced just over a decade ago, and at that time the big ISPs were offering customers speeds in the range of 25 Mbps. GPON delivers 2.4 gigabits to up to 32 homes and can easily support residential gigabit service. At the time of its introduction GPON represented at least a forty-times increase in customer capacity compared to DSL and cable modems – a gigantic leap forward in capability. It takes a long time for consumer household usage to grow enough to fill that much new capacity. The next biggest leap forward we’ve seen was the jump from dial-up to 1 Mbps DSL – roughly a 17-times increase in capacity.
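To show the capacity arithmetic behind those comparisons, here is a quick worked sketch. It is my own illustration: the ~60 Mbps legacy shared-node figure is an assumption used only to reproduce the rough forty-times comparison, not a number from the post.

```python
# Worked version of the capacity comparisons above (illustrative assumptions noted).

GPON_DOWNSTREAM_MBPS = 2_400   # 2.4 Gbps shared per PON
HOMES_PER_PON = 32

per_home_share = GPON_DOWNSTREAM_MBPS / HOMES_PER_PON
print(f"Guaranteed share per home on a full PON: {per_home_share:.0f} Mbps")  # 75 Mbps

# Assumption: a legacy DSL/cable neighborhood node shared on the order of 60 Mbps.
legacy_node_mbps = 60
print(f"GPON node vs. legacy node: {GPON_DOWNSTREAM_MBPS / legacy_node_mbps:.0f}x")  # ~40x

# The earlier big leap: dial-up (56 kbps) to 1 Mbps DSL.
print(f"Dial-up to 1 Mbps DSL: {1_000 / 56:.1f}x")  # ~17.9x, the roughly 17-times leap cited
```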

Even if somebody starts reaching capacity on a GPON network, there are upgrades that are far less expensive than migrating to a new technology. A GPON network won’t reach capacity evenly – the strain will show up in some neighborhood nodes first. The capacity in a neighborhood GPON node can easily be doubled by cutting the size of the node in half, splitting it into two PONs. I have one client that did the math and said that as long as they can buy GPON equipment they would upgrade by splitting a few times – from 32 homes to 16, from 16 to 8, and maybe even from 8 to 4 – before they’d consider tearing out GPON for something new. Each such split doubles capacity, and splitting nodes three times would be an 8-fold increase in capacity. If we continue on the path of seeing household bandwidth demand double every three years, then splitting nodes twice would easily add more than another decade to the life of a PON network. In doing that math it’s important to understand that splitting a node actually more than doubles capacity, because it also decreases the oversubscription factor for each customer on the node.
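As a rough sketch of that splitting math (my own illustration, not the client’s spreadsheet), the loop below assumes demand doubles every three years and treats each split as only a doubling of per-home capacity. That is a floor: as noted above, the real gain per split is larger because the oversubscription factor also drops, which is what stretches the headroom beyond these figures.

```python
# Node-splitting sketch: raw per-home capacity and the minimum headroom each split buys.
GPON_MBPS = 2_400          # shared downstream capacity per PON
DOUBLING_YEARS = 3         # assumption: household demand doubles every three years

for splits in range(4):                       # 0 = today's 32-home PON
    homes = 32 // (2 ** splits)
    per_home = GPON_MBPS / homes
    gained_years = splits * DOUBLING_YEARS    # floor; real gain is larger (see note above)
    label = "baseline" if splits == 0 else f"at least ~{gained_years} more years of headroom"
    print(f"{splits} splits -> {homes:>2} homes/PON, {per_home:>4.0f} Mbps per home ({label})")
```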

At CCG we’ve always prided ourselves on being technology neutral and vendor neutral. We think network providers should use the technology that most affordably fits the needs of their end users. We rarely see a residential fiber network where GPON is not the clear winner from a cost and performance perspective. We have clients using numerous active Ethernet technologies that are aimed at serving large businesses or for long-haul transport. But we are always open-minded and would readily recommend NGPON2 or XGS-PON if it is the best solution. We just have not yet seen a network where the new technology is the clear winner.

Categories
Technology, What Customers Want

The WISP Dilemma

For the last decade I have been working with many rural communities seeking better broadband. For the most part these are places that the large telcos have neglected and never provided with any functional DSL. Rural America has largely rejected the current versions of satellite broadband because of the low data caps and because the latency won’t support streaming video or other real-time activities. I’ve found that lack of broadband is at or near the top of the list of concerns in communities without it.

But a significant percentage of rural communities have access today to WISPs (wireless ISPs) that use unlicensed frequencies and point-to-multipoint radios to bring a broadband connection to customers. The performance of WISPs varies widely. There are places where WISPs deliver solid and reliable connections that average between 20 and 40 Mbps download. But unfortunately there are many other WISPs delivering slow broadband in the 1 – 3 Mbps range.

The WISPs that have fast data speeds share two characteristics. They have a fiber connection directly to each wireless transmitter, meaning that there are no bandwidth constraints. And they don’t oversubscribe customers. Anybody who was on a cable modem five or ten years ago understands oversubscription. When there are too many people on a network node at the same time the performance degrades for everybody. A well-designed broadband network of any technology works best when there are not more customers than the technology can optimally serve.

But a lot of rural WISPs are operating in places where there is no easy or affordable access to a fiber backbone. That leaves them with no alternative but to use wireless backhaul. This means using point-to-point microwave radios to get bandwidth to and from a tower.

Wireless backhaul is not in itself a problem. If an ISP can use microwave to deliver enough bandwidth to a wireless node to satisfy the demand there, they’ll have a robust product and happy customers. But the problems start when networks include multiple ‘hops’ between wireless towers. I often see WISP networks where the bandwidth goes from tower to tower to tower. In that kind of configuration, all of the towers and all of the customers on those towers share whatever bandwidth is delivered to the first tower in the chain.
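A simple sketch makes the problem obvious. The numbers below (link size, tower count, customers per tower, busy-hour demand) are all assumptions for illustration, not figures from any particular WISP.

```python
# Illustration of why daisy-chained backhaul hurts: every tower in the chain
# shares whatever bandwidth reaches the first tower. All figures are assumptions.

backhaul_to_first_tower_mbps = 300     # single microwave link feeding the chain
towers_in_chain = 4
customers_per_tower = 50
peak_demand_per_customer_mbps = 3      # assumed busy-hour average per customer

total_customers = towers_in_chain * customers_per_tower
available_per_customer = backhaul_to_first_tower_mbps / total_customers
print(f"{total_customers} customers share {backhaul_to_first_tower_mbps} Mbps "
      f"-> {available_per_customer:.1f} Mbps each at peak")
if available_per_customer < peak_demand_per_customer_mbps:
    print("The chain is backhaul-constrained; customers on the far towers suffer most.")
```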

Adding hops to a wireless network also adds latency – each hop means it takes longer for traffic to get to and from customers at the outer edges of one of these wireless chains. Latency, or time lag, is an important factor in being able to perform real-time functions like video streaming, voice over IP, gaming, or maintaining connections to an online class or a distant corporate WAN.

Depending upon the brand of the radios and the quality of the internet backbone connection, a wireless transmitter that is connected directly to fiber can have a latency similar to that of a cable or DSL network. But when chaining multiple towers together the latency can rise significantly, and real-time applications start to suffer at latencies of 100 milliseconds or greater.
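Here is a rough latency sketch of that effect. The base latency and per-hop delay are assumptions chosen only to show how a chained design can cross the ~100 millisecond threshold mentioned above; real figures depend on the radios and the backbone connection.

```python
# Rough latency accumulation sketch (all numbers are illustrative assumptions).
base_latency_ms = 30      # assumed for a fiber-fed tower, comparable to DSL/cable
per_hop_latency_ms = 20   # assumed extra delay added by each additional wireless hop

for hops in range(5):
    total = base_latency_ms + hops * per_hop_latency_ms
    flag = "  <-- real-time applications start to suffer" if total >= 100 else ""
    print(f"{hops} extra hops: ~{total} ms{flag}")
```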

WISPs also face other issues. One is the age of their wireless equipment. No part of our industry has made bigger strides over the past ten years than the manufacturing of subscriber microwave radios. The newest radios have significantly better operating characteristics than radios made just a few years ago. WISPs are for the most part relatively small companies and have a hard time justifying replacing equipment before it has reached the end of its useful life. And unfortunately there is not much opportunity for small incremental upgrades of equipment. The changes in the technology have been significant enough that upgrading a node often means replacing the transmitters on the towers as well as the subscriber radios.

The final dilemma faced by WISPs is that they are often trying to serve customers in locations that are not well situated to receive a wireless signal. The unlicensed frequencies require good line-of-sight and suffer degraded signals from foliage, rain, and other impediments, so it’s hard to reliably serve customers who are surrounded by trees or who live in places that are blocked by the terrain.

All of these issues mean that reviews of WISPs vary as widely as you can imagine. I was served by a WISP for nearly a decade, and since I lived a few hundred feet from the tower and had a clear line-of-sight, I was always happy with the performance I received. I’ve talked to a few people recently who get WISP speeds as fast as 50 Mbps. But I have also talked to a lot of rural people whose WISP connections are slow, with high latency that makes for a miserable broadband experience.

It’s going to be interesting to see what happens to some of these WISPs as rural telcos deploy CAF II money and provide a faster broadband alternative that will supposedly deliver at least 10 Mbps download. WISPs who can beat those speeds will likely continue to thrive while the ones delivering only a few Mbps will have to find a way to upgrade or will lose most of their customers.

Categories
Current News, Technology

Is There a Web Video Crisis? – Part I

The whole net neutrality issue has been driven by the fact that companies like Comcast and Verizon want to charge large content providers like NetFlix to offset some of the network cost of carrying their videos. Comcast implies that without such payments NetFlix content will have trouble making it to customers. By demanding such payments Comcast is saying that their network is having trouble carrying video, meaning that there is a video crisis on the web or on the Comcast network.

But is there? Certainly video is king and constitutes the majority of traffic on the web today. And the amount of video traffic is growing rapidly as more customers watch video on the web. But everybody has known for years that this was coming, and Comcast can’t be surprised that it is being asked to deliver video to people.

Let’s look at this issue first from the edge backwards. Let’s say that on average in a metro area Comcast has sold a 20 Mbps download product to each of its customers. Some buy slower or faster speeds than that, but every one of Comcast’s products is fast enough to carry streaming video. Like all carriers, Comcast does something called oversubscription, meaning that they sell more access to customers than their network can supply at once. But in doing so they are banking on the fact that everybody won’t watch video at the same time. And they are right, it never happens. I have a lot of clients with broadband networks and I can’t think of one that has been overwhelmed in recent years by demand from customers on the edge. Those edge networks ought to be robust enough to deliver the speeds that are sold to customers. That is the primary thing customers are paying for.

So Comcast’s issues must be somewhere else in the network, because their connections to customers ought to be robust enough to deliver video to a lot of people at the same time. One place that could be a problem is the Internet backbone – the connection between Comcast and the rest of the Internet. I have no idea how Comcast manages this, but I know how hundreds of smaller carriers do it. They generally buy enough capacity so that they rarely use more than some base amount, like 60% of the backbone. By keeping a comfortable overhead on the Internet pipe they are ready for those rare days when usage bursts much higher. And if they do get too busy they usually have the ability to burst above their contracted bandwidth limits to satisfy customer demand. This costs them more, but the capacity is available to them without their having to ask for it.
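To put that rule of thumb into numbers, here is a minimal sketch of the headroom calculation. The measured peak, the 60% target, and the 10-gigabit purchase increment are all assumptions used for illustration.

```python
# Backbone headroom sketch: size the Internet pipe so normal peak usage stays
# around a target utilization, leaving room for unusual bursts. All figures assumed.
import math

peak_usage_gbps = 12          # assumed measured busy-hour usage
target_utilization = 0.60     # keep the pipe no more than ~60% full at peak
circuit_increment_gbps = 10   # assumed: backbone capacity bought in 10 Gbps increments

required = peak_usage_gbps / target_utilization
purchased = math.ceil(required / circuit_increment_gbps) * circuit_increment_gbps
print(f"Peak of {peak_usage_gbps} Gbps at a {target_utilization:.0%} target -> "
      f"provision at least {required:.0f} Gbps; buy {purchased} Gbps")
```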

So one would not think that the issue for Comcast is their connection to the Internet. They ought to be sizing that according to the capacity they are selling in aggregate to all of their end users. The price of backbone bandwidth has been dropping steadily for years, and the price that customers pay for bandwidth should be sufficient for Comcast to make the backbone robust enough.

That only leaves one other part of the network, which is what we refer to as distribution. These are the fiber connections that run from a headend or hub out to neighborhoods. Certainly these connections have gotten larger over time, and I would assume that, like all carriers, Comcast has had to increase capacity in the distribution plant. Where a neighborhood might once have been perfectly fine sharing a gigabit of data, it might now need ten gigabits. That kind of upgrade means putting a larger laser on the fiber connection between the Comcast headend and the neighborhood nodes.

Again, I would think that the prices customers pay ought to cover the cost of the distribution network, just as they ought to cover the edge network and the backbone. Comcast has been unilaterally increasing speeds for customers over time – they come along periodically and raise customer speeds, say from 10 Mbps to 15 Mbps. One would assume that they would only increase speeds if they have the capacity to actually deliver those new higher speeds.

From the perspective of the base components of the network – the edge, the backbone, and the distribution – I can’t see where Comcast should be having a problem. The prices that customers pay ought to be more than sufficient to make sure that those three components are robust enough to deliver what Comcast is selling to customers. If they aren’t, then Comcast has oversold its capacity, which sounds like Comcast’s issue and not NetFlix’s.

In the next article in this series I will look at other issues, such as caching, as possible reasons why Comcast needs extra payments from NetFlix. Because it doesn’t appear to me that NetFlix ought to be responsible for the way Comcast builds their own networks. One would think that those networks are built to deliver the bandwidth customers have paid for, regardless of where on the web that traffic comes from.
