Digital Discrimination

The FCC recently opened a docket, at the prompting of federal legislation, that asks for examples of digital discrimination. The docket asks folks to share stories about how they have had a hard time obtaining or keeping broadband, specifically due to issues related to zip code, income level, ethnicity, race, religion, or national origin.

The big cable companies and telcos are all going to swear they don’t discriminate against anybody for any reason, and every argument they make will be pure bosh. Big corporations, in general, favor more affluent neighborhoods over poor ones. Neighborhoods that don’t have the best broadband networks are likely going to be the same neighborhoods that don’t have grocery stores, gas stations, retail stores, restaurants, banks, hotels, and a wide variety of other kinds of infrastructure investment from big corporations. The big cable companies and telcos are profit-driven and focused on stock prices, and they make many decisions based on the expected return to the bottom line – just like other large corporations.

There is clearly discrimination by ISPs by income level. It’s going to be a lot harder to prove discrimination by ethnicity, race, religion, or national origin, although it’s likely that some stories of this will surface in this docket. But discrimination based on income is everywhere we look. There are two primary types of broadband discrimination related to income – infrastructure discrimination and price discrimination.

Infrastructure discrimination for broadband has been happening for a long time. It doesn’t take a hard look to see that telecom networks in low-income neighborhoods are not as good as those in more affluent neighborhoods. Any telecom technician or engineer can point out a dozen differences in the quality of the infrastructure between neighborhoods.

The first conclusive evidence of this came years ago from a study that overlaid upgrades for AT&T DSL over income levels, block by block in Dallas. The study clearly showed that neighborhoods with higher incomes got the upgrades to faster DSL during the early 2000s. The differences were stark, with some neighborhoods stuck with first-generation DSL that delivered 1-2 Mbps broadband while more affluent neighborhoods had been upgraded to 20 Mbps DSL or faster.

It’s not hard to put ourselves into the mind of the local AT&T managers in Dallas who made these decisions. The local manager would have been given an annual DSL upgrade budget and would have decided where to spend it. Since there wasn’t enough budget to upgrade everywhere, the local manager would have made the upgrades in neighborhoods where faster cable company competition was taking the most DSL customers – likely the more affluent neighborhoods that could afford the more expensive cable broadband. There were probably fewer customers fleeing the more affordable DSL option in poor neighborhoods where the price was a bigger factor for consumers than broadband speeds.

These same kinds of economic decisions have been played out over and over, year after year by the big ISPs until affluent neighborhoods grew to have better broadband infrastructure than poorer neighborhoods. Consider a few of the many examples of this:

  • I’ve always noticed that there are more underground utilities in newer and more affluent neighborhoods than in older and poorer ones. This puts broadband wires safely underground and out of reach from storm damage – which over time makes a big difference in the quality of the broadband being delivered. Interestingly, the decision of where to require underground utilities is made by local governments, and to some degree, cities have contributed to the difference in infrastructure between affluent and low-income neighborhoods.
  • Like many people in the industry, when I go to a new place, I automatically look up at the condition of the poles. While every place is different, there is clearly a trend to have taller and less cluttered poles in more affluent parts of a city. This might be because competition brought more wires to a neighborhood, which meant more make-ready work done to upgrade poles. But I’ve spotted many cases where poles in older and poorer neighborhoods are the worst in a community.
  • It’s easy to find many places where the Dallas DSL story is being replayed with fiber deployment. ISPs of all sizes cherry-pick the neighborhoods that they perceive to have the best earnings potential when they bring fiber to a new market.

We are on the verge of having AI software that can analyze data in new ways. I believe that we’ll find that broadband discrimination against low-income neighborhoods runs a lot deeper than the way we’ve been thinking about it. My guess is that if we mapped all of the infrastructure related to broadband, we’d see firm evidence of the infrastructure differences between poor and more affluent neighborhoods.

I am sure that if we could gather the facts related to the age of the wires, poles, and other infrastructure, we’d find the infrastructure in low-income neighborhoods is significantly older than in other neighborhoods. Upgrades to broadband networks are usually not done in a rip-and-replace fashion but are done by dozens of small repairs and upgrades over time. I also suspect that if you could plot all of the small upgrades done over time to improve networks, you’d find more of these small upgrades, such as replacing cable company power taps and amplifiers, to have been done in more affluent neighborhoods.
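As a rough sketch, the kind of income-overlay analysis I’m imagining might look something like this – the blocks, incomes, and plant ages below are all invented for illustration, not real data:

```python
from statistics import mean

# Hypothetical per-block records. Real data would come from network
# inventory systems and census income tables; these values are made up.
blocks = [
    {"block": "A1", "median_income": 32000, "plant_age_years": 28},
    {"block": "A2", "median_income": 35000, "plant_age_years": 25},
    {"block": "B1", "median_income": 91000, "plant_age_years": 9},
    {"block": "B2", "median_income": 88000, "plant_age_years": 12},
]

def avg_age_by_income(records, threshold=50000):
    """Average outside-plant age for blocks below and above an income threshold."""
    low = [r["plant_age_years"] for r in records if r["median_income"] < threshold]
    high = [r["plant_age_years"] for r in records if r["median_income"] >= threshold]
    return {"low_income": mean(low), "high_income": mean(high)}

print(avg_age_by_income(blocks))  # -> {'low_income': 26.5, 'high_income': 10.5}
```

Nothing about the comparison itself is hard – the hard part is assembling the underlying inventory of wires, poles, taps, and amplifiers that nobody currently publishes.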

We tend to think of broadband infrastructure as the network of wires that brings fast Internet to homes, but modern broadband has grown to be much more than that, and there is a lot of broadband infrastructure that is not aimed at home broadband. Broadband infrastructure has also come to mean small cell sites, smart grid infrastructure, and smart city infrastructure. I believe that if we could map everything related to these broadband investments we’d see more examples of discrimination.

Consider small cell sites. Cellular companies have been building fiber to install small cell sites to beef up cellular networks. I’ve never seen maps of small cell installations, but I would wager that if we mapped all of the new fiber and small cell sites we’d find a bias against low-income neighborhoods.

I hope one day to see an AI-generated map that overlays all of these various technologies against household incomes. My gut tells me that we’d find that low-income neighborhoods will come up short across the board. Low-income neighborhoods will have older wires and older poles. Low-income neighborhoods will have fewer small cell sites. Low-income neighborhoods won’t be the first to get upgraded smart grid technologies. Low-income neighborhoods won’t get the same share of smart city technologies, possibly due to the lack of other infrastructure.

This is the subtle discrimination that the FCC isn’t going to find in their docket because nobody has the proof. I could be wrong, and perhaps I’m just presupposing that low-income neighborhoods get less of every new technology. I hope some smart data guys can find the data to map these various technologies because my gut tells me that I’m right.

Price discrimination has been around for a long time, but I think there is evidence that it’s intensified in recent years. I first noticed price discrimination in the early price wars between the big cable companies and Verizon FiOS. This was the first widespread example of ISPs going head-to-head with decent broadband products where the big differentiator was the price.

I think the first time I heard the term ‘win-back program’ was related to cable companies working hard not to lose customers to Verizon. There are stories from the early days of heavy competition of Comcast keeping customers on the phone for a long time when they tried to disconnect service. The cable company would throw all sorts of price incentives at customers to stop them from leaving for Verizon. Over time, the win-back programs grew less aggressive, but they are still with us today in markets where cable companies face stiff competition.

I think price competition has gotten a lot more subtle, as witnessed by a recent study in Los Angeles that showed that Charter offers drastically different online prices for different neighborhoods. I’ve been expecting to see this kind of pricing for several years. This is a natural consequence of all of the work that ISPs have done to build profiles of people and neighborhoods. Consumers have always been leery about data gathered about them, and the Charter marketing practices by neighborhood are the natural endgame of having granular data about the residents of LA.

From a purely commercial viewpoint, what Charter is doing makes sense. Companies of all sorts use pricing to reward good existing customers and to lure new customers. Software companies give us a lower price for paying for a year upfront rather than paying monthly. Fast food restaurants, grocery stores, and a wide range of businesses give us rewards for being regular customers.

It’s going to take a whistleblower to disclose what Charter is really doing. But the chances are it has a sophisticated software system that gives a rating for individual customers and neighborhoods based on the likelihood of customers buying broadband or churning to go to somebody else. This software is designed to offer a deeper discount in neighborhoods where price has proven to be an effective technique to keep customers – without offering lower prices everywhere.

I would imagine the smart numbers guy who devised this software had no idea that it would result in blatant discrimination – it’s software that lets Charter maximize revenue by fine-tuning the price according to a computer prediction of what a given customer or neighborhood is willing to pay. There has been a lot of speculation about how ISPs and others would integrate the mounds of our personal data into their businesses, and it looks like it has resulted in finely-tuned price discrimination by city block.
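To be concrete about what I mean, here is a deliberately simplified sketch of how such a pricing engine might work – the formula, the numbers, and the discount cap are all my invention, not anything Charter has disclosed:

```python
def offered_price(list_price, churn_probability, max_discount=0.30):
    """Hypothetical pricing rule: the higher the predicted chance that a
    customer or neighborhood will churn to a competitor, the deeper the
    discount -- capped so the price never drops below 70% of list."""
    discount = min(max_discount, churn_probability * max_discount)
    return round(list_price * (1 - discount), 2)

# A 'sticky' neighborhood with little competition gets almost no break...
print(offered_price(80.00, churn_probability=0.05))  # -> 78.8
# ...while a price-sensitive neighborhood near a competitor gets the cap.
print(offered_price(80.00, churn_probability=1.0))   # -> 56.0
```

The point of the sketch is that each line is an innocuous-looking business rule – the discrimination only becomes visible when the outputs are mapped against who actually lives in each neighborhood.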

Is There a Fix for Digital Discrimination?

The big news in the broadband industry is that we are in the process of throwing billions of dollars to solve the ultimate case of economic discrimination – the gap between urban and rural broadband infrastructure. The big telcos completely walked away from rural areas as soon as they were deregulated and could do so. The big cable companies never made investments in rural areas due to the higher costs. The difference between urban and rural broadband networks is so stark that we’ve decided to cure decades of economic discrimination by spending billions of dollars to close the gap.

But nobody has been seriously looking at the more subtle manifestation of the same issue in cities. The FCC is only looking at digital discrimination because it was required by the Infrastructure Act. Does anybody expect that anything will come out of the stories of discrimination? ISPs are going to say that they don’t discriminate. If pinned down, they will say that what looks like discrimination is only the consequence of them making defensible economic decisions and that there was no intention to discriminate.

Most of the discrimination we see in broadband is due to the lack of regulation of ISPs. They are free to chase earnings as their top priority. ISPs have no regulatory mandate to treat everybody the same. The regulators in the country chose to deregulate broadband, and the digital discrimination we see in the market is the direct consequence of that choice. When AT&T was a giant regulated monopoly we required it to charge everybody the same prices and take profits from affluent customers to support infrastructure and prices in low-income neighborhoods and rural places. Regulation wasn’t perfect, but we didn’t have the current gigantic infrastructure and price gaps.

If people decide to respond to this FCC docket, we’ll see more evidence of discrimination based on income. We might even get some smoking gun evidence that some of the discrimination comes from corporate bias based on race and other factors. But discrimination based on income levels is so baked into the ways that corporations act that I can’t imagine that anybody thinks this docket is going to uncover anything we don’t already know.

I can’t imagine that this investigation is going to change anything. The FCC is not going to make big ISPs spend billions to clean up broadband networks in low-income neighborhoods. While Congress is throwing billions at trying to close the rural broadband gap, I think we all understand that anywhere the big corporations take the rural grant funding, the infrastructure is not going to be maintained properly, and in twenty years we’ll be having this same conversation all over again. We know what is needed to fix this – regulation that forces ISPs to do the right thing. But I doubt we’ll ever have the political or regulatory will to force the big ISPs to act responsibly.

Why ISPs Don’t Expand

A lot of smaller ISPs are currently expanding their service footprints. They are often using grant funding to add more service areas and customers, while others are expanding using the more traditional route of borrowing to build new networks. But not all small ISPs are expanding, and some are expanding only in small increments. Today I want to talk about the reasons I’ve been given by ISPs that have decided not to expand.

Fear of Being Able to Compete. I’ve talked to a lot of small ISPs who are afraid to compete against the big cable companies. They don’t feel like they can win enough customers to be successful. This is particularly true for ISPs that have only competed in rural areas and are afraid of entering the towns next door.

I generally refer folks with this fear to some of the small companies that have successfully entered larger markets. These companies have learned that fair prices and good service will eventually win over customers, and those customers rarely return to the original giant ISP.

Fear of New Debt. I know some small ISPs who take great pride in having no company debt. They view debt as a burden that weighs down their business. Realistically, debt is a tool. It can provide money to expand today, which can be easily justified if the new net earnings from the expansion are greater than the cost of carrying the debt. I’ve found that it’s generally impossible to talk somebody out of the concept that debt is bad, but since the majority of ISPs carry debt and consider the reasons for the debt to be justified, there is huge market evidence that this fear is irrational.
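The arithmetic behind ‘debt is a tool’ is straightforward. This sketch uses the standard annuity formula for a level annual loan payment – the loan size, interest rate, term, and earnings projection are all illustrative assumptions, not figures from any real ISP:

```python
def annual_debt_service(principal, rate, years):
    """Level annual payment on an amortizing loan (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

# Illustrative numbers only: a $5M fiber expansion financed at 7% over 15 years.
payment = annual_debt_service(5_000_000, 0.07, 15)
print(round(payment))  # roughly $549,000 per year

# The expansion is justified if the projected new net earnings exceed the
# annual cost of carrying the debt.
new_net_earnings = 800_000  # assumed projection
print(new_net_earnings > payment)  # True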

The real barrier to small companies taking on new debt is that they are likely going to be required to pledge the existing company to guarantee the new debt. That is something that all businesses face, not just ISPs. But this is a risk that some company owners will not accept.

Staff Can’t Handle Change. An interesting reason I hear for not growing is that a business owner/manager feels like the existing staff can’t handle the changes that come with growing. They tell me their staff is set in their ways and is not going to be able to cope with the idea of doing something new. My response to this has always been twofold. First, I’ve found that employees love the challenge of making their company better, and I have seen hundreds of examples where the staff from a small sleepy company stepped up and thrived as the company grew. Second, if your staff is really that inflexible, it might be time to talk about hiring new staff.

Reluctance to Change Habits. Small ISPs have likely used the same processes for many decades, and the idea of having to change the way things are done can be intimidating. In a small ISP, everybody knows their role and knows what they will be doing every day. The idea of disrupting that comfortable work life can be scary since it’s usually clear that the old way of doing things probably won’t work in a competitive environment. The real fear is that the work culture will change – that it won’t be the same company after growth. The chances are that it won’t be the same, but that doesn’t mean that the expanded company can’t still be a great place to work.

Lack of Creativity/Innovation. Some small ISPs have told me that they don’t think they are creative enough to cope with expanding the business. I really have no idea how to respond to this fear. But I am reminded of the old analogy that even the most important person in the world puts their pants on one leg at a time. The fact is, the majority of the tasks involved in entering a new market are almost identical to what an ISP already does today.

What’s interesting about this list is that every reason on it boils down to either fear of the unknown or not wanting to accept risk. What I’ve found is that if an ISP considers these issues from that kind of perspective, they might change their mind about growth. For example, if ISP management asks the question – what am I really afraid of – they might decide that growth isn’t as scary as they feared. I’ve always recommended that ISPs talk to their peers who have already made the leap to enter new markets to see if their fears are rational. It’s also worthwhile doing a financial analysis that shows the worst case – what happens if an ISP enters a market and the wheels come off. I’ve often found that the worst case is not nearly as bad as an ISP feared.

Epic Broadband Outages

Every once in a while I hear a customer story that reinforces the big mistake we made in largely eliminating broadband regulation. This particular story comes from the Chatham News + Record in Chatham County, North Carolina. Some customers there experienced what can only be described as epic outages.

The first outage occurred on October 1 to residents near Charlie Cooper Road from a downed line as the result of Hurricane Ian. Duke Power restored power within two days, but it took twenty days for Brightspeed – the new incumbent telephone company that purchased the property from CenturyLink – to repair the damage. Not to give Brightspeed an excuse, but the outage occurred while the network was still owned by CenturyLink – the sale of the network closed on October 3, two days after the outage. Twenty days is still an extraordinarily long time to make a line repair, but I’ve been part of the aftermath of the sales of telecom properties, and the first thirty days are often rough on the buyer.

The second outage occurred in the same rural neighborhood on November 28 when a tractor-trailer pulled down wires that were hanging too low. Residents believe that the low wires were a result of a shoddy repair from the Hurricane Ian outage. By this time, Brightspeed had owned the company for two months, and it took a full month, until December 27, to restore service.

Customers were highly frustrated because they got no useful response from Brightspeed customer service. There seemed to be no acknowledgment that there was an outage, even as multiple people called multiple times to complain about the outage.

This is not an unusual story in rural America. I’ve talked to dozens of folks who are rural customers of big telcos who have lost broadband for more than a week at a time, and some of them regularly lose service multiple times per year.

The article describes the problems the outages caused for residents. One resident was quoted as saying that broadband access has become as important as having water to the home.

One would think that consumers with this kind of problem could seek relief from the State – that a regulator could intervene to get the telephone company’s attention. When I was first in the industry, a customer complaint referred from a state commission got instant priority inside a telephone company.

But a workable complaint process is now a thing of the past. The rules for making a consumer complaint with the North Carolina Utility Commission are a barrier to the average consumer and seem to favor big telcos. It’s not even clear if the NCUC has jurisdiction over broadband – that’s not clear anywhere after the FCC under Ajit Pai walked away from all broadband regulation. The NCUC still lightly regulates telephone service, but it’s not clear if that applies in the same way to broadband.

Regardless of the regulatory issues, the process for filing a complaint is not simple. A consumer must complete an official complaint form and file an original and 15 paper copies – complaints cannot be filed online or by email. The NCUC sends a copy of the complaint to the utility, which must respond in ten days. If the suggested solution from the utility is not adequate, the consumer can either drop the complaint or ask for a formal hearing – which would be an intimidating process for most folks, because the hearing is held in a formal court setting following normal court rules. Not many consumers are going to wade through this highly formal process, which is slanted in favor of utilities and their attorneys and not consumers.

The reality is that consumers have been at the mercy of the big telcos ever since state commissions deregulated telephone service. I’ve heard hundreds of stories over the years of big telcos that have run roughshod over folks. One of the most common stories I’ve heard in the last few years is of telcos disconnecting DSL rather than trying to fix it.

The first outage for these folks could have slipped through the cracks due to the extraordinary event of the telephone company changing ownership right after the outage. But there is no possible excuse for the second month-long outage. Most of my clients are small ISPs, and they all would have fixed this second outage within a day. I’ve repeatedly cautioned about giving large rural grants to the large telcos, and this outage is one of a thousand reasons not to do so.

How Good is FWA Wireless?

T-Mobile got some bad news recently when the National Advertising Division (NAD) of BBB National Programs informed T-Mobile that it could not use the words “fast” and “reliable” when advertising the fixed wireless access (FWA) product that it brands as T-Mobile Home Internet. This ruling came as a result of a complaint from Comcast that T-Mobile is overstating the capabilities of the FWA product in advertising.

Most large carriers belong to BBB National Programs as a lower-cost way of mitigating advertising disputes than lawsuits. ISPs agree to go along with the rulings issued by the group as a condition of joining. However, in this case, T-Mobile is appealing the decision. The news wasn’t all bad for T-Mobile since the NAD ruled that the company could continue to advertise that the price of FWA is ‘locked in’ since the company hasn’t raised its rates.

Anybody who has looked closely at the performance of FWA wireless from T-Mobile or Verizon would agree with this ruling. The main reason for the ruling is that the performance of FWA can vary widely. It’s a broadband product that connects to customers from a cell site, and the distance between a customer and the cell site makes a big difference in the speed being delivered. I talked to one customer located near a T-Mobile tower who was consistently getting over 200 Mbps download and was really pleased with the product. But in this same community, customers only a mile or so away from that same cell tower were getting speeds closer to 50-100 Mbps and were not as happy with the product. A mile further away and speeds were not good at all, and I talked to a farmer who sent the receiver back. In a rural area, a mile isn’t very far, and unless there are a lot of towers, most folks are not getting the advertised fast speeds.

The one consistent feedback I’ve gotten in talking to FWA customers is that speeds vary. This is true for all cellular broadband, and cell phone customers are used to seeing a different number of signal bars over time from the same location, such as home or the office. Cellular data speeds vary for a wide variety of reasons, like temperature and weather.

But the biggest reason for the variability is the overall volume of data being demanded from a given cell site at a given moment. Like most broadband products, cellular broadband is a shared data product where the broadband is divvied up among the users at any given time. But unlike landline broadband networks, a cellular company cannot control the number of users at a cell site. Since cell phones are mobile, there is no telling how many people might be demanding a cellular data connection at any given time.

FWA has one more limitation in that the cellular carriers have elected to give first priority to cell phones over FWA customers. This means that when a cell site gets busy, the carrier will choke the delivered data speeds to FWA customers in order to deliver the most speed possible to cellular customers. This makes sense since T-Mobile and Verizon each have roughly 100 million cellular customers compared to a few million FWA customers. They do not want to make cellular customers unhappy with broadband speeds, and so they throttle FWA when a cell site gets busy.
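A deliberately simplified sketch of this kind of priority scheduling follows – the capacity and demand numbers are invented, and real cellular schedulers are far more sophisticated than serve-mobile-first:

```python
def allocate_capacity(total_mbps, mobile_demand, fwa_demand):
    """Toy priority scheduler: mobile handsets are served first, and FWA
    customers share whatever capacity is left over at the cell site."""
    mobile_served = min(total_mbps, mobile_demand)
    fwa_served = min(total_mbps - mobile_served, fwa_demand)
    return mobile_served, fwa_served

# Quiet hour: both classes of customer get their full demand.
print(allocate_capacity(1000, mobile_demand=400, fwa_demand=300))  # (400, 300)
# Busy hour: mobile demand spikes and FWA gets throttled.
print(allocate_capacity(1000, mobile_demand=900, fwa_demand=300))  # (900, 100)
```

Even this toy version shows why FWA speeds swing so much during the day: the FWA customer’s experience depends entirely on how busy the mobile side of the cell site happens to be.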

T-Mobile doesn’t hide this, and the throttling is discussed in the fine print when the product is advertised. But that throttling is part of the reason that T-Mobile can’t describe its product as reliable – because at busy times it isn’t.

The big selling point for FWA is the low price, and I’m sure the price is what has attracted urban customers. The speeds are going to be welcome in rural areas where there are no alternatives, but there is definitely a severe distance limitation – in a rural area a 50 Mbps connection might be a big leap up in performance. But the FWA product is a lot slower than cable company broadband. Households who are heavy broadband users might not like the slower speeds and the variability. This ruling is telling T-Mobile that it can’t advertise in a way that makes FWA sound like an equivalent alternative to cable or fiber broadband, because it isn’t. It’s going to be interesting to see how T-Mobile adjusts its advertising after this ruling.

Safe Software Upgrades

We’ve had some spectacular recent failures of software upgrades gone wrong. The one that got the most press was a software problem at the FAA that knocked out nationwide flights by corrupting the NOTAM system that transmits real-time information to pilots about flight hazards and airspace restrictions. The FAA said the outage was created when personnel unintentionally deleted files while working to correct synchronization between the live primary database and a backup database.

There seem to be regular outages of Internet platforms caused by software issues. In the last year, there have been outages at Google, Facebook, Twitter, and dozens of other software platforms. The telecom industry has had plenty of outages caused by similar issues that have knocked out large chunks of the Internet backbone or various data centers. Any of these outages that were software-related have one thing in common – with good software upgrade procedures, the outages likely could have been prevented.

Telephone companies have the longest history of working with software upgrades that are capable of knocking out networks. The possibility of big voice outages crept into the industry when we replaced electromechanical switches with electronic switches. The big telephone companies developed software upgrade protocols that were designed to minimize outages due to software upgrades. Even when upgrades went poorly, smart telcos adopted processes for quickly flipping back to the original configuration. The frequency and the size of the software outages we keep seeing today are good indicators that a lot of companies are not following the safe practices that have been around for decades.

One of the first things that anybody who touches the core of a network should understand is that there is no such thing as a casual upgrade or casual maintenance of a mission-critical system. It’s obvious there are bad practices in place when one technician can delete or modify a file and cause a major outage – it should be impossible for somebody to have access to casually do that.

The processes for safe software upgrades are well known. They require a lot more discipline than many network engineers want to use – but they are safe. The tried-and-true way to make a software upgrade is as follows:

Have a Project Manager for the Upgrade. It is vital to have one person in charge of the upgrade. They can get assistance in planning and doing the upgrade, but they need to be ready and authorized to react if things don’t go as planned.

Develop a Checklist. There should be a step-by-step checklist covering all aspects of the upgrade. Make sure to understand every piece of equipment and software that will be affected by the upgrade, and then, most importantly, document the exact steps required to perform it.

Break the Upgrade into Manageable Steps. If possible, the upgrade should be done in stages where progress can be measured and tested after each step.

Establish a Baseline / Establish a Go-Back Process. Establishing a baseline means understanding the current network configuration in detail. It means understanding the exact settings of every piece of software and equipment. Once the baseline is in place there should be a go-back process. This is the process of returning software and hardware to the original configuration if something goes wrong during the upgrade. Ideally, the go-back would be something that can be implemented quickly, and if designed well, can be done in minutes.

Make Sure to have Vendor Support. It’s worth considering having a vendor representative on site for major upgrades, or on alert for minor ones. I have seen clients schedule an upgrade over a holiday, not thinking that the needed expertise at the vendor is probably not going to be available.

Pre-test Every Component before the Cut. Safe practices establish a test lab for a complicated upgrade where the new software and/or hardware is tested first in a lab setting instead of live.

Take Every Upgrade Seriously. I often see companies follow most of the above steps for major upgrades only to see them knock out their network for what they think of as simple upgrades or routine maintenance.
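The baseline and go-back steps above can be sketched in a few lines – this toy example treats the configuration as a simple dictionary, which real network gear obviously is not, but the discipline is the same: snapshot first, then change, and be able to restore in one step:

```python
import copy

def apply_upgrade(config, changes):
    """Apply an upgrade only after snapshotting a baseline; return the
    baseline so a go-back can restore it in one step if anything fails."""
    baseline = copy.deepcopy(config)  # Establish a Baseline
    config.update(changes)            # the upgrade itself
    return baseline

def go_back(config, baseline):
    """Go-back process: restore the exact pre-upgrade configuration."""
    config.clear()
    config.update(baseline)

router = {"firmware": "4.1", "bgp_timer": 30}
baseline = apply_upgrade(router, {"firmware": "5.0"})

# Post-upgrade test failed? Roll back in one step.
go_back(router, baseline)
print(router)  # -> {'firmware': '4.1', 'bgp_timer': 30}
```

The detail that matters is that the baseline is captured before anything is touched – a go-back plan invented after an upgrade has gone sideways is no plan at all.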

It’s easy to define a bad upgrade process as one where a single technician can unilaterally change files and settings in a mission-critical system without going through any of the above processes. Every time there is a bad outage we hear reasons for the outage like a corrupted file or bad hardware – nobody ever admits they were too casual with an upgrade, although that’s probably the real reason for the outage.

Poor Rural Connectivity Costs Lives

The Washington Post wrote an article recently that talked about how poor rural connectivity cost lives during a tornado in Louisiana. Around the country, there are now elaborate alert systems in areas subject to tornadoes and other dangerous weather events. These alerts have been shown to save lives since they give folks enough time to seek shelter or get out of the path of a storm. I apologize that the article is behind a paywall, but here is the link for anybody who can read it.

This story is not unique, and the same thing plays out whenever a bad storm passes through areas with poor broadband and cellular coverage. In this case, a family was killed by the storm because they didn’t see the storm alerts, and other people were unable to reach them to tell them about the alerts. In this particular case, a husband and wife tried repeatedly to warn the family about the storm. But their landline connection was terrible, they didn’t have good broadband, and the cellular coverage was inadequate – so nobody was able to reach the family that ultimately got killed by the storm.

I’ve created lists many times of the benefits of rural broadband, but until I read this article, I never thought to say that good broadband saves lives. The government has spent a lot of money creating emergency alert systems for various purposes, including storm warnings. I live in a city, and I get alerts from the City for all sorts of things, including storm alerts. Living in a city means I have the option to receive alerts by text, email, or even an automated voice call – and the alerts reach me.

AT&T has collected billions in federal funding to build the nationwide FirstNet network for the First Responder Network Authority. AT&T told the Washington Post that it added 60 small cellular sites in recent years in Caddo Parish, where this storm struck. But it’s likely that most of these sites were placed to beef up the network where most folks live and don’t extend far into the rural parts of the parish.

My consulting firm administers a lot of broadband surveys every year in rural counties. These surveys are mostly aimed at helping to define areas that have inadequate broadband. But in practically every rural survey we have ever done, we find 30% or more of homes – sometimes a much higher percentage – saying that they don’t have home cellular coverage.

There are some potential solutions being considered to help solve this problem, but like everything the FCC gets involved in, it’s complicated. The FCC announced a $9 billion 5G fund at the end of 2020 that is aimed at improving rural cellular coverage. The mechanics of that subsidy fund have not yet been announced, and like other broadband initiatives, it seems like the FCC wants to see better cellular coverage maps before trying to fund a solution. My first take is that the cellular coverage data in the new FCC mapping system is probably in worse shape than the landline broadband maps.

The idea of using federal funds to improve rural cellular coverage is further complicated by the huge amounts of federal funding that are aimed at improving rural broadband. It would be extremely wasteful to give the cellular carriers money to extend fiber networks to rural cell sites when other funding should be building the same fiber routes. The big funding for rural broadband seems likely to eliminate the need to fund fiber for most rural cell sites. It still makes sense to provide subsidies to build towers and open rural cell sites, because it’s nearly impossible to make a business case for a rural cell tower that only reaches a small number of households.

None of these solutions will be fast, so there is no quick fix in the immediate future. The FCC ought to be able to figure out a way to get solid cellular signals to folks like the ones in Caddo Parish who really need it. But I despair if getting this right means first getting the FCC maps right – something I doubt will ever happen.

Lumen’s Fiber Path Forward

Lumen is taking a different path forward than the other big telcos. AT&T continues to build fiber in selected clusters, mostly in cities, rather than concentrate on building entire markets. Frontier, Windstream, and Consolidated are all concentrating on upgrading existing telco DSL networks to fiber.

Lumen has a different path forward. In a recent press release, the company announced a major upgrade to its long-haul fiber routes that cross the country. The company’s main fiber strategy is to beef up the intercity network with plans to add six million miles of fiber to existing fiber routes by 2026. In case you are wondering how there can possibly be six million route miles of fiber in the country – that count is miles of individual fibers. This is a marketing trick that long-haul fiber providers have been using for years to make networks seem gigantic.
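The strand-mile math is easy to illustrate. Here is a minimal sketch of how the marketing count works – the cable size and route mileage below are hypothetical examples for illustration, not figures from Lumen:

```python
def strand_miles(route_miles: float, fibers_per_cable: int) -> float:
    """Marketing-style 'fiber miles': route miles multiplied by the
    number of individual fiber strands in the cable."""
    return route_miles * fibers_per_cable

# A single hypothetical 1,000-mile route built with 432-strand cable
# counts as 432,000 'fiber miles' even though it is one physical route.
print(strand_miles(1_000, 432))
```

By this counting, a relatively modest amount of new construction with high-count cable can legitimately be described as millions of miles of fiber.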

The existing Lumen long-haul fiber network came to the company in two acquisitions. The original network came when CenturyLink bought Qwest, a major builder of long-haul networks that had earlier merged with US West. The network was strengthened when CenturyLink purchased Level 3 Communications.

The original Qwest fiber is getting dated in terms of capacity and performance. Much of this fiber was built thirty to forty years ago. While most of the fiber is still functional, fiber glass technology has improved drastically since then. Lumen will be using two types of low-loss fiber from Corning. The newer fiber is far clearer than older fiber and will increase the distance between repeater points while also allowing the use of today’s fastest 400-gigabit electronics and faster electronics later.

Earlier this year, Lumen announced that it is improving its Ethernet architecture in forty cities. This means upgrading local networks to major customers to provide speeds of up to 30 gigabits. While this upgrade will mostly benefit business customers, it will also improve the local fiber backbone in these cities to 100 gigabits, which should improve performance for all broadband customers.

Lumen is also pursuing a last-mile fiber expansion. In August, the company announced fiber expansion plans in Denver, Minneapolis, and Seattle. The company had a target of passing one million locations with fiber this year but has fallen a little behind due to supply chain and logistics issues.

Unlike the other telcos, Lumen hasn’t been talking much about the upcoming rural grant funding. That doesn’t mean the company won’t pursue those opportunities, since rural fiber expansion creates natural monopolies. But major residential expansion does not seem to be a key part of the Lumen plan, at least compared to companies like Frontier, which says it plans to pass 12 million homes with fiber.

Another big unknown is whether the company is still trying to sell any of its remaining copper networks, as it did with the sale of its networks in the twenty easternmost states to Apollo Global Management. It would be a more drastic affair to shed last-mile customers in the states where US West was formerly the incumbent Bell company.

Any further sales of last-mile networks would be an interesting step that would see Lumen retracting to become a large business-focused ISP. The company already had a sizable share of the business market, which was bolstered by the acquisition of Level 3.

Lumen shares one characteristic with all of the big telcos in that it knows it must reinvent itself. After many years of no activity, Verizon is expanding FiOS again while also pushing a nationwide FWA network. AT&T is fully committed to building last-mile fiber networks and continues to add millions of new passings per year. The smaller telcos like Frontier and Windstream have clearly decided they must build fiber or fade away. Lumen is still the big wild card that hasn’t fully committed to any single expansion strategy and is pursuing different paths. For folks who track what the big ISPs are doing, that makes Lumen, if nothing else, the most interesting company to watch.

Counting Broadband Locations

All of the discussion of the FCC maps lately made me start thinking about broadband connections. I realized that many of my clients are providing a lot of broadband connections that are not being considered by the FCC maps. That led me to think that the old definition of a broadband passing is quickly growing obsolete and that the FCC mapping effort is missing the way that America really uses broadband today.

Let me provide some real-life examples of broadband connections provided by my clients that are not being considered in the FCC mapping:

  • Broadband connections to farm irrigation systems.
  • Broadband to oil wells and mining locations.
  • Broadband to wind turbines and solar farms.
  • Fiber connections to small cell sites.
  • Broadband to electric substations. I have several electric company clients that are in the process of extending broadband to a huge number of additional field assets like smart transformers and reclosers.
  • Broadband to water pumps and other assets that control water and sewer systems.
  • Broadband to grain elevators, corn dryers, and other locations associated with processing or storing crops.
  • Broadband for smart-city applications like smart streetlights, smart parking, and smart traffic lights. I’m working with several clients who are extending broadband for these purposes.
  • Broadband to smart billboards and smart road signs.
  • Broadband for train yards and train switching hubs.

There are many other examples – this was just a quick list that came to mind.

The various locations described above have one thing in common – most don’t have a 911 street address. As such, these locations are not being considered when trying to determine the national need for broadband.

A lot of these locations are rural in nature – places like grain elevators, mines, oil wells, irrigation systems, wind turbines, and others. In rural areas, these locations are a key part of the economy, and in many places are unserved or underserved.

We are putting a huge amount of national energy into counting the number of homes and businesses that have or don’t have broadband. In doing so, we have deliberately limited the definition of a business to a place with a brick-and-mortar building and a 911 address. But the locations above are often some of the most important parts of the local economy.

I’ve read predictions that say in a few decades there will be far more broadband connections to devices than to people, and that rings true to me. I look around at the multiple devices in my home that use WiFi, and it’s not hard to envision that over time we will connect more and more locations and devices to broadband.

After a decade of talking about the inadequate FCC broadband maps, we finally decided to throw money at the issue and devise new maps. But in the decade it took to move forward, we’ve developed multiple non-traditional uses for broadband, a trend that is likely to expand. If we are really trying to define our national need for broadband, we need to somehow make sure that the locations that drive the economy are connected to broadband. And the only way to do that is to count these locations and put them on the broadband map, so somebody tries to serve them. The current maps are doing a disservice by ignoring the huge number of these non-traditional broadband connections.

When Fiber Construction Goes Wrong

The Common Ground Alliance (CGA) recently issued its 2021 Damage Information Reporting Tool (DIRT) report. CGA was founded in 2000 and is an association of companies that engage in underground construction. Members include excavators, locators, road builders, electric companies, telecom companies, oil and gas distribution and transmission companies, railroads, One Call centers, municipal public works departments, equipment manufacturers, state regulators, insurance companies, emergency services, and engineering/design firms. The goal of the CGA is to highlight and reduce the damage done to utilities during underground construction.

Here are the current trends discussed in the DIRT report:

  • CGA used statistical models showing a plateau, or perhaps a tiny decrease, in the frequency of damages caused by underground construction since 2019.
  • Calls to locating services increased by 8% in 2021, which CGA believes is a precursor to the construction that will result from the Infrastructure Investment and Jobs Act. In past years the frequency of damages has correlated to the overall volume of construction work, so the expectation is that damages due to construction will increase over the next few years.
  • The vast majority of damages (80%) are caused by professional excavators. The rest are caused by individuals and farmers, municipalities, and utilities.
  • The most common source of damages (almost half) is work done by a backhoe.
  • The most commonly damaged infrastructure is natural gas and telecom infrastructure.
  • There were 81,000 damage reports to natural gas systems and 75,000 reports of damage to telecom networks.
  • CGA notes 25 different causes of damage, with six causes accounting for 76% of damage reports.
  • The most prevalent cause of damage (25%) occurs when work is done without first calling to locate other utilities. CGA research says that professional awareness of the need for locating services is high, but 60% of all damages due to no notification are attributed to professional excavators.
  • The next most common causes of damage are excavators failing to pothole, failing to maintain sufficient clearance between digging equipment and buried facilities, and facilities not being marked or being marked inaccurately due to locator error and/or incorrect facility records/maps.
  • Nearly a quarter of damages reported by excavators resulted in downtime, so better practices would be a time and money saver.
  • CGA gathered over 217,000 reports of damage in the U.S. in 2021 and another 11,000 in Canada.

This report is an interesting reminder that good work practices can make a big difference in avoiding damage. Fiber construction projects are often brought to a screeching halt when damage is done to existing utilities, particularly gas and water lines. This is well worth reading for anybody associated with construction.