It’s the ISP, Not Just the Technology

Davis Strauss recently wrote an article for Broadband Breakfast that reminded me that good technology does not always mean a good ISP. There are great and not-so-great ISPs using every technology on the market. Mr. Strauss lists a few of the ways that an ISP can cut costs when building and operating a fiber network – cuts that are ultimately detrimental to customers. Following are a few examples.

Redundant Backhaul. Many of the BEAD grants will be built in areas where the existing broadband networks fail regularly due to cuts in the single fiber backhaul feeding the area. I hear stories all the time about folks who regularly lose broadband for a few days at a time, and sometimes much longer. Building last-mile fiber will not solve the backhaul issue if the new network relies on the same unreliable backhaul.

Oversubscription. It’s possible to overload a local fiber network just like any other network if an ISP loads more customers onto a network node than can be supported by the bandwidth supplying the node. There are multiple places where a fiber network can get overstressed, including the path between the core and neighborhoods and the path from the ISP to the Internet.
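
To put numbers on it, here is a minimal back-of-the-envelope sketch of the math; the node capacity, customer count, plan speed, and busy-hour demand are all hypothetical figures chosen for illustration, not numbers from the article:

```python
# Back-of-the-envelope oversubscription check for a single network node.
# All of the figures below are illustrative assumptions.

node_capacity_mbps = 2_400              # e.g., a GPON downstream link of roughly 2.4 Gbps
customers_on_node = 64                  # homes sharing that link
advertised_speed_mbps = 300             # plan speed sold to each customer
busy_hour_demand_per_customer_mbps = 8  # assumed average demand at the busiest hour

oversubscription_ratio = (customers_on_node * advertised_speed_mbps) / node_capacity_mbps
busy_hour_demand_mbps = customers_on_node * busy_hour_demand_per_customer_mbps

print(f"Oversubscription ratio: {oversubscription_ratio:.0f}:1")
print(f"Busy-hour demand: {busy_hour_demand_mbps} Mbps of {node_capacity_mbps} Mbps available")

# The node holds up as long as actual busy-hour demand stays below capacity.
# Loading more customers onto the node, or skimping on the bandwidth feeding it,
# pushes this check toward failure and customers see slowdowns.
if busy_hour_demand_mbps > node_capacity_mbps:
    print("This node is overloaded at the busy hour")
```

The same check applies at every aggregation point in the network, including the ISP’s connection to the Internet.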

Lack of Spares. Fiber circuit cards and other key components in the network fail just like any other electronics. A good ISP will have spare cards within easy reach to be able to quickly restore the network in the case of a component failure. An ISP that cuts corners by not stocking spares can have multi-day outages while a replacement is located.

Poor Network Records. This may not sound like an important issue, but it’s vital for good customer service and network maintenance. Individual fibers are tiny and are not easy for a field technician to identify if there aren’t good records matching a given fiber to a specific customer. There is an upfront effort and cost required to organize records, and an ISP that skimps on record keeping will be forever disorganized and will take longer to perform routine repairs and maintenance.

Not Enough Technicians. Possibly the most important issue in maintaining a good network is having enough technicians to support it. The big telcos have historically understaffed field technicians, which has resulted in customers waiting days or weeks just to have a technician respond to a problem. ISPs can save a lot of money by running a too-lean staff, to the detriment of customers.

Inadequate Monitoring. ISPs that invest in good network monitoring can head off a huge percentage of customer problems by reacting to network issues before customers even realize there is a problem. Many network problems can be remedied remotely by a skilled technician if the ISP is monitoring the performance of every segment of the network.
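
As a rough illustration of what heading off problems looks like in practice, here is a minimal monitoring sketch. The segment names, the polling function, and the 90% alert threshold are hypothetical stand-ins for whatever SNMP or telemetry platform an ISP actually uses:

```python
import random
import time

# Placeholder for a real SNMP or telemetry query of a network segment.
# It returns a random value here only so the sketch runs on its own.
def poll_utilization_percent(segment: str) -> float:
    return random.uniform(20.0, 100.0)

SEGMENTS = ["core-to-neighborhood-1", "core-to-neighborhood-2", "isp-to-internet"]
ALERT_THRESHOLD = 90.0  # percent utilization; an assumed trigger point

def monitor_once() -> None:
    for segment in SEGMENTS:
        utilization = poll_utilization_percent(segment)
        if utilization >= ALERT_THRESHOLD:
            # A real ISP would open a ticket or page a technician here,
            # ideally before customers ever notice slow speeds.
            print(f"ALERT: {segment} is at {utilization:.0f}% utilization")

if __name__ == "__main__":
    while True:
        monitor_once()
        time.sleep(300)  # poll every five minutes
```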

These are just a few examples of the ways that ISPs can cut corners. It is these behind-the-scenes operating decisions that differentiate good and poor ISPs. Mr. Strauss doesn’t come right out and say it, but his article implies that some ISPs chasing the giant BEAD funding will be in the business of maximizing profits early so they can flip the business. An ISP with this mentality is not going to spend money on redundant backhaul, record keeping, spares, or network monitoring. It will hope that a new fiber network can eke by without the extra spending. It might even be right about this for a few years, but eventually, taking shortcuts always comes back to cost more than doing things the right way.

We already know that some ISPs cut corners, because we’ve watched them do it for the last several decades. The big telcos will declare loudly that DSL networks perform badly because of the aging of the networks. There is some truth in that, but there are other ISPs still operating DSL networks that perform far better. The rural copper networks of the big telcos perform so poorly because the big telcos cut every cost possible. They eliminated technicians, didn’t maintain spare inventories, and invested nothing in additional backhaul.

I honestly don’t know how a state broadband office is going to distinguish between an ISP that will do things right and one that will cut corners – that’s not the sort of thing that can be captured in a grant application since every ISP will say it plans to do a great job and will offer superlative customer service.

Benefits of Peering

Peering is the process of exchanging Internet traffic directly between networks instead of passing it through the open Internet. That probably requires a little explanation and an example. Let’s say that you’re at home and you want to look at a website for a bookstore. You do this by typing in a web URL (the name of the website). Regardless of where that bookstore is located – in your town or across the country – your request is routed by your ISP to the open Internet. Every ISP has connections to reach the web, which the industry generically calls backhaul. In plain English, that means a fiber route from the ISP to the Internet.

Some ISPs buy connections directly at the major Internet hubs in places like Kansas City, Washington DC, Atlanta, etc. Many ISPs instead send traffic to a closer point of presence, and the request gets passed on by somebody else to the primary Internet hubs. The major Internet hubs then route the request to the region of the country where the website for the bookstore is hosted. The closest hub to the final destination will hand the request to the ISP that hosts the website. Once your request reaches the bookstore’s website, the process is repeated in the reverse direction so that the response is sent back to you and you can interact with the website.

Peering is a process that bypasses this normal routing. If your ISP has a peering arrangement with the ISP that hosts the bookstore website, your request would be handed from your ISP directly to that ISP without the intermediate steps of passing through Internet hubs.
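
One rough way to see the difference from the outside is to compare the route your traffic takes to a network your ISP peers with against the route to one it only reaches through the major Internet hubs – the peered destination typically passes through fewer intermediate networks. Here is a minimal sketch that shells out to the standard traceroute tool (assumed to be installed) and counts hops; the two hostnames are placeholders, and hop count is only a crude proxy for path length:

```python
import subprocess

def hop_count(host: str) -> int:
    """Run traceroute and count the hops it reports (a crude proxy for path length)."""
    result = subprocess.run(
        ["traceroute", "-n", host],   # -n skips reverse DNS lookups to keep it quick
        capture_output=True, text=True, timeout=120,
    )
    # Hop lines start with the hop number; the header line does not.
    return sum(1 for line in result.stdout.splitlines() if line.strip()[:1].isdigit())

# Placeholder destinations: one the ISP peers with directly, one reached the long way around.
for destination in ["peered-network.example.com", "transit-only.example.com"]:
    print(destination, "->", hop_count(destination), "hops")
```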

There are two major benefits of a peering arrangement. First, it’s faster, because extra time is required to pass through the intermediate hubs; in a peering arrangement, the request goes straight to the ISP that hosts the bookstore website. The bigger advantage is that peering saves money for both ISPs. ISPs must pay to transport traffic to and from the Internet and also pay for usage at the major Internet hubs. When this particular request is sent through a peering arrangement, your ISP avoids paying to use the major Internet hubs.

Peering makes the most sense and saves the most money when it can bypass Internet hubs with large amounts of traffic, so the most common peering arrangements connect with companies that generate a lot of web traffic. The three largest users of bandwidth for residential ISPs are Google, Netflix, and Facebook. All three of those companies are willing to enter into a peering arrangement with an ISP if it saves money. These companies also like peering because it improves performance for users.

Large ISPs probably all peer directly with these large web companies and others. It’s unusual for big web companies to peer directly with a small ISP. However, there are a number of places around the country where small ISPs pool their traffic to peer with the large web companies. These regional peering hubs might be owned by one of the ISPs or perhaps by a third party.

Peering can save a lot of money. I talked to several of my clients who take advantage of peering, and they claim that peering saves them from sending between 30% and 65% of their traffic through the open Internet – depending on the specific nature of the peering arrangement.
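
Using that 30% to 65% range and an assumed transit price, the back-of-the-envelope savings look something like this; the traffic level and the price per Mbps are hypothetical figures for illustration:

```python
# Rough estimate of transit savings from peering. The traffic volume and
# transit price are illustrative assumptions; the 30%-65% offload range
# comes from the ISPs quoted above.

busy_hour_traffic_mbps = 20_000   # assumed 95th-percentile traffic (20 Gbps)
transit_price_per_mbps = 0.50     # assumed dollars per Mbps per month

for offload_fraction in (0.30, 0.65):
    offloaded_mbps = busy_hour_traffic_mbps * offload_fraction
    monthly_savings = offloaded_mbps * transit_price_per_mbps
    print(f"{offload_fraction:.0%} offloaded -> roughly ${monthly_savings:,.0f} per month "
          f"less transit, before peering port and transport costs")
```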

Peering with the large web companies is not free, and an ISP must provide the transport to reach the peering partner. But this can still save a lot of money compared to paying for broadband usage at the major Internet hubs.

There is another kind of peering that is talked about less but is widely used: private peering. Another name for this is creating a private network that bypasses the Internet. One of the biggest examples of this is the Internet2 network, where universities pass large volumes of traffic directly between each other without going through the Internet. The federal government has a huge private network for government and military traffic. Many companies pay for a private network between different branches of the company. It’s common for schools in a region to be networked together in a private network.

If an ISP isn’t peering today, it’s worth asking around to see if any peering opportunities are available. If you are in a region where none of the small ISPs are peering, it might make sense to work together to create a peering arrangement for the region. All that’s generally needed to justify a peering point is to aggregate enough traffic volume to make it worthwhile to the big web companies.

More Details on Starlink

A few months ago, Starlink, the satellite broadband company founded by Elon Musk, launched 60 broadband satellites. Since that launch, we’ve learned a few more things about the secretive venture.

We now know more details about the satellites. Each one weighs about 500 pounds. They are thin rectangular boxes like a flat-panel TV. Much of the surface is a solar panel, and each satellite also extends a second solar panel.

Each satellite comes with a krypton-powered ion thruster that is used to navigate the satellite into its initial orbit and to avoid debris when necessary. This may sound like a cutting-edge propulsion system, but it’s been around for many years. The tiny engines create a small amount of thrust by shooting out charged ions of the noble gas – not a lot of thrust is needed to move a 500-pound satellite.
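
To put “not a lot of thrust” in perspective, here is a quick calculation under assumed numbers – the thrust figure is an illustrative guess for a small ion thruster, not a published Starlink specification:

```python
# How much a tiny ion thruster can move a small satellite, using assumed numbers.
# The thrust value is an illustrative assumption, not a published spec.

satellite_mass_kg = 227.0       # roughly 500 pounds, as described above
assumed_thrust_newtons = 0.05   # tens of millinewtons, typical of small ion thrusters

acceleration = assumed_thrust_newtons / satellite_mass_kg   # m/s^2
seconds_per_day = 86_400
delta_v_per_day = acceleration * seconds_per_day            # velocity change per day, m/s

print(f"Acceleration: {acceleration:.6f} m/s^2")
print(f"Velocity change from one day of continuous thrust: about {delta_v_per_day:.0f} m/s")
# That modest rate of velocity change, accumulated over days, is enough to
# slowly raise an orbit or nudge the satellite around a piece of debris.
```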

It seems the satellites can’t detect nearby space debris, so Starlink instead connects to the Air Force’s Combined Space Operations Center, which tracks the trajectories of all known space debris. The company will direct satellites to avoid known debris.

Probably the most important announcement for readers of this blog is that the company is likely to compete only in rural areas where there are few other broadband alternatives. Musk finally admitted this. There had been hopeful speculation in some parts of the industry that the low-orbit satellites would provide a broadband alternative everywhere, thus supplying a new competitor for the cable companies. Since widespread competition generally results in lower prices, there was hope that satellite broadband would make the whole broadband market more competitive.

We already had an inkling that satellite broadband was going to be rural-only when OneWeb, one of the competitors to Starlink, told the FCC that it was likely going to ultimately need about 1 million wireless licenses for receivers. While that might sound like a huge number, one million satellite connections spread across the US does not create a major competitor. We also heard the same message when several of the satellite companies talked about eventually having tens of millions of customers worldwide at maturity. Even with multiple satellite companies competing for customers, there probably won’t be more than 3 to 4 million satellite broadband customers in the US – that would make a dent but wouldn’t fix the rural broadband gap. This strategy makes sense for the satellite companies since they’ll be able to charge a premium price to rural customers who have no broadband alternative instead of cutting prices to compete with the cable companies.

There has still been no discussion from Starlink or the other competitors on broadband speeds or broadband pricing. It’s been nearly impossible to predict the impact of the satellites without understanding data speeds and total download capacity. The physics suggest that backhaul to the satellites will be the critical limiting factor, so it’s possible that there will be monthly data caps or some other way to control consumption.

One of the most interesting unanswered questions is how the satellites will do backhaul. Landline ISPs of any size today control costs and data volumes by directly peering with the largest sources of broadband demand – mostly Netflix, Google, Amazon, and Microsoft. As much as 70% of the traffic headed to an ISP comes from this handful of destinations. Engineers are wondering how Starlink will handle peering. Will there be backhaul between satellites, or will each satellite have a dedicated link to the ground for all data usage? This is a key question when a satellite is passing over a remote area – will it try to find a place within sight of the satellite to connect to the Internet, or will data instead be passed between satellites, with connections only at a major hub?

Answering that question is harder than might be imagined because these satellites are not stationary. Each satellite continuously orbits the earth, so a given customer will be handed off from one satellite to the next as satellites pass beyond the visible horizon. The company says the receivers are about the size of a pizza box and are not aimed at a given satellite, as happens with satellite TV – instead, each receiver just has to be aimed generally skyward. It’s hard to think that there won’t be issues for homes in heavily wooded areas.
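
A rough orbital calculation shows why those handoffs have to happen constantly. Assuming a 550 km orbit (an assumed altitude for illustration), a single satellite is only above the horizon for a given spot on the ground for a few minutes at a time:

```python
import math

# How long one low-orbit satellite stays above the horizon for a point on the
# ground. The 550 km altitude is an assumption for illustration; the result
# also ignores Earth's rotation and any trees or buildings blocking the view.

EARTH_RADIUS_KM = 6371.0
MU_KM3_S2 = 398_600.4418        # Earth's gravitational parameter
altitude_km = 550.0

orbit_radius_km = EARTH_RADIUS_KM + altitude_km
period_s = 2 * math.pi * math.sqrt(orbit_radius_km**3 / MU_KM3_S2)

# Central angle over which the satellite sits above a flat horizon for an
# observer directly beneath the orbital track.
half_angle_rad = math.acos(EARTH_RADIUS_KM / orbit_radius_km)
visible_fraction = (2 * half_angle_rad) / (2 * math.pi)
max_pass_minutes = visible_fraction * period_s / 60

print(f"Orbital period: about {period_s / 60:.0f} minutes")
print(f"Longest possible pass overhead: about {max_pass_minutes:.0f} minutes")
# So a receiver has to be handed from one satellite to the next every few minutes.
```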

One last interesting tidbit is that the satellites are visible to the naked eye. When the recent launch was first completed, it was easy to spot the string of 60 satellites before they dispersed. Astronomers are wondering what this will mean when there are ten thousand satellites from the various providers filling the sky. Elon Musk says he’s working to reduce the albedo (the reflection of sunlight) to limit any problems this might cause for land-based astronomy. But for stargazers, this means there will always be multiple visible satellites crossing the sky.