Jargon

There is a good chance that if you are reading this blog, you are well versed in a fair amount of telecom industry jargon. I do my best in writing this blog to stay away from as much jargon as possible, but it’s not easy. Jargon is shorthand, and it lets folks already in the industry talk about topics without having to explain basic concepts every time they arise.

Every segment of the industry has its own jargon. Wireless folks know what’s meant when a colleague talks about MIMO, QAM, and RAN. Fiber folks understand what is meant by OLT, jitter, and backscattering. Cable company folk can talk about DAA, CMTS, and DOCSIS. The folks that finance broadband networks talk about yield, basis points, and acid test. Regulators all know what is meant by NARUC, NOI, and CPNI.

But I challenge any industry folks reading this blog – go look out your front door and ask yourself how many of your neighbors know what DOCSIS or XGS-PON means. How many know what you mean if you refer to NRTC or WISPA – or even that those are shorthand for organizations?

It’s hard to avoid using jargon. It’s nearly impossible to talk to a network engineer about the performance of a network without going quickly into jargon. It’s challenging to read an FCC order if you don’t know the regulatory jargon. You’d better understand the banker jargon before agreeing to a new loan for a network.

But jargon can quickly get in the way when we want to communicate with somebody who doesn’t know our shorthand. As an example, I recently sat through a presentation that a water engineer gave to a City Council. This engineer used jargon throughout, and I could tell that the elected officials weren’t following the nuances of what he was talking about. I would hope that after the presentation somebody explained it to the elected officials – but since this engineer couldn’t describe his concept in plain English, most of the points he was making went over everybody’s heads.

Most folks assume everybody in the industry understands their jargon, but I know this isn’t so. Just listen to the way that a field technician and a customer service representative answer the same question from a customer – they are likely to use very different words.

I try my best to keep jargon down in this blog, but sometimes it’s almost impossible to do. It’s hard to write a 700-word piece and make a point if you have to explain each technical concept. I have to laugh when I get comments on a blog from a technician who is sure that I don’t know what I’m talking about when I try to summarize technical terms into plain language and use analogies to explain a concept. I can just hear them sputtering that I’m not being precise enough.

But this blog is a reminder to industry folks that we need to take a step back from jargon if we want folks to understand us. I can promise you that in a meeting of telecom folks that there will be attendees who don’t know what some of the jargon means, but are too embarrassed to say so. Jargon can be a total roadblock when trying to explain broadband to non-industry folks.

I had a college English teacher who told me something that has always stuck with me. She said that a good writer should be able to make any concept understandable to their grandmother. This doesn’t mean you have to write or speak without jargon if your target audience is folks who understand the jargon. But it means that communication can easily fail if you can’t explain things in a way that a listener will understand.

Replacing Poles

When folks ask me for an estimate of the cost of building aerial fiber, I always say that the cost depends on the amount of make-ready required. Make-ready is well-named – it’s any work that must be done on poles to make them ready for stringing the new fiber.

One of the most expensive aspects of make-ready comes from having to replace existing poles. Poles need to be replaced before adding a new fiber line for several reasons:

  • The original pole is too short, and there is not space to add another wire without upgrading to a taller pole. National electric standards require specific distances between different wires for technician safety when working on a pole.
  • It’s possible that the new wire will add enough expected wind resistance during storms that the existing pole is not strong enough to take on an additional wire.
  • One of the most common reasons for replacing poles is that the poles are worn out and won’t last much longer. That’s what the rest of the blog discusses.

Poles don’t last forever. The average wooden utility pole has an expected life of 45 to 50 years. This can differ by the locality, with poles lasting longer in the desert where there are no storms and having a shorter life in more challenging environments. It’s easy to think of poles as being strong and hard to damage, but the forces of nature can create a lot of stress on a pole. The biggest stress on most poles comes from the cumulative effect of heavy winds or ice pulling on the wires and attachments.

There are a lot of reasons why poles fail:

  • Most poles are made of rot-resistant wood, but the protection eventually wears off, and poles can decay. This can be made worse if vegetation has been allowed to grow onto a pole.
  • Using a pole differently than the way it was designed is common. A pole might have been rated to carry utility wires but over time got loaded with extra attachments like electric transformers, streetlights, or cellular electronics.
  • The soil around the base of a pole can change over the decades. The area may now be subject to flooding and erosion that wasn’t anticipated when the pole was built.
  • Somebody might have removed a guy wire that was supporting the pole and not replaced it.
  • A pole may have been hit by a car, but not badly enough to be replaced.

ISPs complain when saddled with the full cost of pole replacement. Many of the issues described above should more rightfully be borne by the pole owner. But the federal and most state make-ready rules put the entire cost burden of a pole replacement on the new attacher. It is clearly not fair to make a new attacher pay the full cost to replace a pole that was already in less than ideal condition.

It may seem to the general public that poles are just stuck into the ground. But if you’ve ever watched a new pole being placed, you’ll know that the process can be complex. The design of any new pole must account for all of the anticipated stresses the pole will have to endure. This includes the weight of the wires in a windstorm, ice accumulation, soil composition, the quality of neighboring poles, the spacing between poles (the greater the spacing, the more weight and wind resistance), and if the pole is standalone or to be guyed (anchored to the ground with several strong supporting cables).

Most engineers estimate that a generic aerial construction project will require replacing around 10% of the poles. It’s a pleasant surprise when the percentage is smaller, but it can be real sticker shock if a lot of poles must be replaced. I’ve seen projects where an electric company has neglected maintenance and most of the poles were inadequate.
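To see why the replacement percentage dominates an aerial construction budget, here is a rough back-of-the-envelope sketch. The per-pole dollar figures below are purely illustrative assumptions of mine, not quotes from any real project:

```python
def make_ready_cost_per_mile(poles_per_mile, replacement_rate,
                             makeready_per_pole=400.0,
                             replacement_per_pole=12000.0):
    """Rough make-ready estimate for one mile of aerial fiber.

    All dollar figures are hypothetical placeholders - real costs vary
    widely by region, pole owner, and contractor.
    """
    replaced = poles_per_mile * replacement_rate
    prepared = poles_per_mile - replaced
    return prepared * makeready_per_pole + replaced * replacement_per_pole

# A rural road might have roughly 30 poles per mile (an assumption).
typical = make_ready_cost_per_mile(30, 0.10)    # the ~10% rule of thumb
neglected = make_ready_cost_per_mile(30, 0.50)  # a poorly maintained route
```

Under these assumed numbers, going from 10% replacements to 50% roughly quadruples the make-ready line item for the same mile of fiber – which is the sticker shock described above.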

The right question to ask is not how much it costs to build a mile of fiber. The better question to ask is how good are the poles?

Converged Networks

I’ve been reading and thinking about converged networks – networks that are enabled to tackle multiple market segments. The best example of this is the largest cable companies that are using their residential last-mile broadband networks to support the cellular business.

The cellular business is a perfect fit for a cable company. They already have fiber deep into every neighborhood, which makes it easy to strategically locate small cell sites without building additional fiber. The big cable companies have put a lot of effort into WiFi, which can save money by capturing a lot of cellular backhaul traffic from customer phones.

Having the ability to leverage the existing network also gives cable companies a lot of flexibility. They can continue to buy wholesale cellular minutes in areas where the cell traffic volume is light and use their own cellular network where customer usage is high. This is a cost advantage over the cellular companies that must provide their networks everywhere.

It’s an interesting dynamic. I think the cable companies got into the cellular business as a way to increase customer stickiness – meaning making it harder for customers to leave them. The cable companies will only sell cellular to customers who buy their broadband, meaning that a customer that wants a new ISP must also change to a new cellular provider. But now that cable companies are gathering a mass of customers, I have to think they are now looking at cellular as a big profit opportunity.

To a lesser degree, large cellular companies are building a converged network when they are using excess capacity on the cellular network to provide FWA home broadband. This has obviously been a winning strategy in the last year when Verizon and T-Mobile were the only two ISPs with big growth.

But as I look at the long-term outlook for FWA, this doesn’t seem like as strong of a converged strategy as what the cable companies are doing. To me, the difference is in the capability of the two networks. A cable company’s last-mile network can absorb cellular backhaul from customers with barely a blip in network performance. But the same can’t be said for cell sites. It’s far easier for cell sites to reach capacity, and cellular companies have made it clear that they will prioritize cellular data over FWA broadband performance. Maybe cellular carriers can solve this problem by eventually fully implementing the 5G specifications. But for now, cable company networks can handle convergence much more easily than cellular networks.

I have been wondering why fiber providers have not made the same push for convergence. The one exception might be Verizon, which has said in recent years that it now considers all arms of its business when building fiber assets. In the past, the company treated its Fios fiber business, the cellular business, and the CLEC business as arms-length businesses. From what I can tell, Verizon is still not as far down the convergence path as the cable companies – but there might be a lot more of that going on behind the scenes than we know about.

I’m surprised that nobody has tried to integrate the cellular business for small fiber providers. There is a pretty decent list of fiber providers today that have between 100,000 and 1 million customers – and most of them are growing rapidly. It would be a major challenge for a single ISP with a few hundred thousand customers to launch the same kind of MVNO cellular operation that has been done by Comcast and Charter. But it seems like there ought to be a business plan for fiber ISPs to collectively tackle the cellular business. A last-mile fiber company can bring all of the same benefits to an integrated cellular business as the cable companies – it only lacks economies of scale.

I can think of a few reasons nobody has made this work. Taking time to consider cellular is a major distraction for a fiber ISP that is building fiber passings as quickly as possible. There is also the challenge of getting the many mid-sized fiber providers to trust each other enough to be partners. But at some point in the future, it’s hard to think that somebody won’t figure this out.

If fiber ISPs enter the cellular business, broadband becomes a truly converged market where cable companies, cellular companies, and independent fiber providers compete with the same suite of products. I know that’s what the public wants because it breaks some of the monopolies and increases choice. My crystal ball says we will get there – I’m just fuzzy about how long it will take.

Net Neutrality Again?

There is an interesting recent discussion in Europe about net neutrality that has relevance to the U.S. broadband market. The European Commission that oversees telecom and broadband has started taking comments on a proposal to force content generators like Netflix to pay fees to ISPs for using the Internet. I’ve seen this same idea circulating here from time to time, and in fact, this was one of the issues that convinced the FCC to first implement net neutrality.

Netflix generates less than 10% of the broadband traffic in Europe and European ISPs think that Netflix should pay a substantial fee for using the Internet network. Europe looks a lot like the U.S., and Netflix, Meta, Amazon, Google, Apple, and Microsoft generate most of the traffic there. Online video accounts for 65 percent of all traffic on the web. Netflix argues that the amount of video on the web will continue to climb and that any fees charged to video providers will eventually be applied to a wider range of content providers.

It’s an interesting topic that can be considered from different perspectives. First, companies like Netflix already spend a lot of money to use the network today. Just like in the U.S., Netflix has built or purchased transport to allow local peering. Netflix claims to provide 18,000 local servers in 175 countries to move its video signals closer to ISP networks. This relieves a lot of volume on the Internet core and improves the quality of Netflix content. The same is true for other content providers, and in the U.S., there are a lot of local peering points that have been created by Google, Meta, and others.

Netflix makes the point that the big ISPs in Europe are already profitable and the ISPs would simply pocket any new revenue stream. They are highly skeptical that any benefit to ISPs from charging Netflix would be passed on to Netflix customers through lower broadband prices.

When net neutrality was discussed in the U.S., there was a good argument made by content providers that subscribers are already paying for end-to-end use of the Internet in the monthly fees paid to ISPs. Charging the content providers for using the Internet would amount to billing twice for the same traffic. Since the original net neutrality discussion here, U.S. broadband prices charged by cable companies have increased significantly, making it even more true that customers are supporting the Internet.

Another way to think about the issue is that video is the service that drives a lot of households to buy broadband. Without Netflix and the other online video content providers, there would not be nearly as many broadband users, and ISPs would not have such a large market share. There is a truism in the industry that says you shouldn’t build a broadband network solely to provide entertainment to customers, but there is no denying that there are a lot of homes that wouldn’t buy broadband if it wasn’t for video and social media. Not everybody works from home or has students that need broadband for schoolwork.

There are several reasons why I am highlighting this European issue. Topics that become issues in Europe invariably are raised as issues here, and vice versa. If American ISPs see that European ISPs have been able to extract payments from Netflix, our ISPs will immediately start making the same demands here.

The other interesting aspect of this particular argument is that it’s something that we already solved once in the past when the FCC passed net neutrality rules. But the Ajit Pai FCC tossed out those rules, so it was inevitable that net neutrality topics would eventually come to life here again.

The net neutrality issue is one of the most interesting topics from a regulatory perspective. Even after Ajit Pai tossed out the net neutrality rules, American ISPs didn’t change their behavior. There are two possible reasons for this. I think ISPs have tried to keep a lid on behavior that would induce regulators to try to put net neutrality back in place again. It seems that perhaps the mere threat of reintroducing net neutrality has kept ISPs in check. However, I find it likely that ISPs are now feeling braver after having squashed the proposed fifth FCC Commissioner.

The other reason is that California put its own version of net neutrality rules in place. This has slowly made its way through the courts and is now in effect. ISPs might not be willing to take on California, because doing so might invite many other states to pass different versions of the same rules. As much as ISPs hate the idea of federal regulation, their biggest fear is a hodgepodge of different regulations across the states.

More Mapping Drama

As if the federal mapping process needed more drama, Senators Jacky Rosen (D-Nevada) and John Thune (R-South Dakota) have introduced a bill, S.1162, that would “ensure that broadband maps are accurate before funds are allocated under the Broadband Equity, Access, and Deployment Program based on those maps”.

If this law is enacted, the distribution of most of the BEAD grant funds to States would be delayed by at least six months, probably longer. The NTIA has already said that it intends to announce the allocation of the $42.5 billion in grants to the states on June 30. The funds are supposed to be allocated using the best count of unserved and underserved locations in each state on that date. Unserved locations are those that can’t buy broadband of at least 25/3 Mbps. Underserved locations are those unable to buy broadband with speeds of at least 100/20 Mbps.
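Those two speed thresholds are simple enough to express in code. Here is a minimal sketch of the classification (the function name and the return labels are my own shorthand, not terminology from the NTIA rules):

```python
def classify_location(down_mbps, up_mbps):
    """Classify a location per the BEAD speed tiers:

    unserved    - can't buy broadband of at least 25/3 Mbps
    underserved - can buy at least 25/3 Mbps but not 100/20 Mbps
    served      - can buy at least 100/20 Mbps
    """
    if down_mbps < 25 or up_mbps < 3:
        return "unserved"
    if down_mbps < 100 or up_mbps < 20:
        return "underserved"
    return "served"
```

Note that both directions matter: a location with a fast download but an upload below the threshold still falls into the lower tier.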

To add to the story, FCC Commissioner Jessica Rosenworcel recently announced that the FCC has largely completed the broadband map updates. That announcement surprised the folks in the industry who have been working with the map data, since everybody I talk to is still seeing a lot of inaccuracies in the maps.

To the FCC’s credit, its vendor CostQuest has been processing thousands of individual challenges to the maps daily and has addressed 600 bulk challenges that have been filed by States, counties, and other local government entities. In making the announcement, Rosenworcel said that the new map has added over one million new locations to the broadband map – homes and businesses that were missed in the creation of the first version of the map last fall.

But the FCC map has two important components that must be correct for the overall maps to be correct. The first is the mapping fabric that is supposed to identify every location in the country that is a potential broadband customer. I view this as a nearly impossible task. The US Census spends many billions every ten years to identify the addresses of residents and businesses in the country. CostQuest tried to do the same thing on a much smaller budget and under the time pressure of the maps being used to allocate these grants. It’s challenging to count potential broadband customers. I wrote a blog last year that outlined a few of the dozens of issues that must be addressed to get an accurate map. It’s hard to think that CostQuest somehow figured out all of these complicated questions in the last six months.

Even if the fabric is much improved, the more important issue is that the accuracy of the broadband map relies on two things reported by ISPs – the coverage area where an ISP should be able to connect a new customer within ten days of a request, and the broadband speeds available at each location.

ISPs are pretty much free to claim whatever they want. While there has been a lot of work done to challenge the fabric and the location of possible customers, it’s a lot harder to challenge the coverage claims of specific ISPs. A true challenge would require many millions of individual challenges about the broadband that is available at each home.

Just consider my own home. The national broadband map says there are ten ISPs available at my address. Several I’ve never heard of, and I’m willing to bet that at least a few of them can’t serve me – but since I’m already buying broadband from an ISP, I can’t think of any reason that would lead me to challenge the claims of the ISPs I’m not using. The FCC thinks that the challenge process will somehow fix the coverage issue – I can’t imagine that more than a tiny fraction of folks are ever going to care enough to go through the FCC map challenge process – or even know that the broadband map exists.

The FCC mapping has also not yet figured out how to come to grips with broadband coverage claimed by wireless ISPs. It’s not hard looking through the FCC data to find numerous WISPs that claim large coverage areas. In real life, the availability of a wireless connection is complicated. The FCC reporting is in the process of requiring wireless carriers to report using a ‘heat map’ that shows the strength of the wireless signal at various distances from each individual radio. But even these heat maps won’t tell the full story. WISPs are sometimes able to find ways to serve customers that are not within easy reach of a tower. But just like with cellphone coverage, there are usually plenty of dead zones around a radio that can’t be reached but that will still be claimed on a heat map – heat maps are nothing more than a rough approximation of actual coverage. It’s hard to imagine that wireless coverage areas will ever be fully accurate.

DSL coverage over telephone copper is equally impossible to map correctly, and there are still places where DSL is claimed but which can’t be served.

Broadband speeds are even harder to challenge. Under the FCC mapping rules, ISPs are allowed to claim marketing speeds. If an ISP markets broadband as capable of 100/20 Mbps, it can claim that speed on the broadband map. It doesn’t matter if the actual broadband delivered is only a fraction of that speed. There are so many factors that affect broadband speeds that the maps will never accurately depict the speeds folks can really buy. It’s amazingly disingenuous for the FCC to say the maps are accurate. The best we could ever hope for is that the maps will get better if, and only if, ISPs scrupulously follow the reporting rules – but nobody thinks that is going to happen.

I understand the frustration of the Senators who are suggesting this legislation. But I also think that we’ll never get an accurate set of maps. Don’t forget that Congress created the requirement to use the maps to allocate the BEAD grant dollars. Grant funding could have been done in other ways that didn’t rely on the maps. I don’t think it’s going to make much difference if we delay six months, a year, or four years – the maps are going to remain consistently inconsistent.

Creating Brand Awareness

ISPs are entering new broadband markets at an unprecedented rate. There have always been ISPs expanding into new markets, but I’m seeing ISP expansion at a far greater rate than ever before. A large percentage of new ISPs are entering markets where they have never served and are operating under brand names that may not be familiar to potential customers. Today I want to talk about some basic Marketing 101 concepts that any ISP entering a new market should be aware of.

It’s easy for somebody bringing fiber to a market to assume that the folks in a new market will flock to the new opportunity – but market research has shown that this is not the case. No matter how much folks might want a better broadband alternative, they also need to be convinced that they can trust the new ISP.

Local ISPs that already operate near a new market have some advantage if folks in the community already know their name. But I think ISPs often overestimate how well they are known in a nearby community – it’s likely that a significant percentage of a community will not know them even if they’ve operated in the region for many years. People tend not to pay attention to brand names for companies and products that are not available to them.

Nielsen has been doing market research for many years and offers up some interesting statistics about the effectiveness of advertising aimed at building a brand identity. The two characteristics that Nielsen says are mandatory for somebody entering a new market are baseline brand awareness and brand recall. These metrics measure how well the residents of any community recognize a brand name (brand awareness) and know what the company behind the brand name sells (brand recall). One other important characteristic is how many folks in a community have a negative opinion of the brand name. This last characteristic defines the uphill battle that the big telcos must overcome in rural markets where a large percentage of folks have an ingrained negative opinion of them.

One of the most interesting statistics from Nielsen is a measure of the effectiveness of brand advertising. In a market where less than half of residents have heard of a brand name, a good brand advertisement can raise awareness of the brand by 8% for those folks that hear or read the ad. The effectiveness of brand advertising decreases with the familiarity of the brand. For instance, if 75% of folks in a community already know a brand, then there is only a 3% bump in positive brand building among those viewing an ad.
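The diminishing-returns pattern in those Nielsen numbers can be illustrated with a toy model: each round of advertising converts a fraction of the viewers who aren’t yet aware, and the conversion rate falls as awareness rises. The middle tier below is my own interpolation – only the 8% and 3% endpoints come from the statistics cited above:

```python
def awareness_after_campaign(initial_awareness, rounds):
    """Toy model of brand awareness growth across ad rounds.

    The lift rates are anchored to the two Nielsen figures cited in
    the text (8% below 50% awareness, 3% at high awareness); the
    middle tier is an assumed interpolation. Treat the shape of the
    curve, not the exact numbers, as the point.
    """
    awareness = initial_awareness
    for _ in range(rounds):
        if awareness < 0.50:
            lift = 0.08
        elif awareness < 0.75:
            lift = 0.05   # assumed midpoint between the cited 8% and 3%
        else:
            lift = 0.03
        # Only folks who see the ad and aren't already aware can convert.
        awareness += (1 - awareness) * lift
    return awareness
```

The model shows why a brand that is already well known gets less out of each additional ad: the pool of unaware residents shrinks, and the lift rate per ad drops at the same time.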

These statistics point out a few things that an ISP entering a market should consider. First, how well do the folks in the community already know you? When residents hear your brand name, do they know you are an ISP and think of broadband? Do the residents have a positive or negative opinion of your company? These are things that can be measured in statistically valid surveys.

But many ISPs take the conservative approach and assume that folks do not know their brand name and reputation. Even in communities where a significant number of residents might know them, they work to build brand name awareness among the folks who do not.

The relatively low success rate of a single brand-building ad should remind an ISP that it has to find multiple ways to get the word out about entering the market. The percentage of folks that will see any advertisement is small, and the incremental impact of brand-building only applies to those who see your ads. This means that a new ISP should leave no stone unturned. On top of the normal ad channels, ISPs should work hard to get stories about entering the market in the newspaper, on social media, and on local TV. New ISPs shouldn’t miss opportunities to have a presence at any local events where lots of people gather.

The bottom line is that an ISP should assume that most people in a new market don’t know who it is. Even if they do, they probably won’t have an opinion about whether it is a good or bad ISP. In fact, if the community has experience with poorly performing incumbents, a new ISP should assume that folks will be skeptical of all ISPs.

It’s too easy to bring a fiber network to a new market and assume that folks will automatically flock to it. I’ve known ISPs who are shocked that they don’t get the flood of new customers they expected. ISPs need to take a page from Marketing 101 and build awareness of their brand name and plant the seed that it is not the same as the incumbents. ISPs that can make that connection with the public usually fare well.

Was That Fiber Construction?

One way that I know that there is a lot of fiber construction occurring is that many of the people I talk to tell me that they’ve seen fiber construction in their neighborhood. I always ask about the type of construction they are seeing, and many folks can’t define it. I thought today I’d talk briefly about the primary methods of fiber construction.

Aerial Fiber. The aerial fiber construction process starts with steps most folks don’t recognize as being fiber-related. Technicians will use cherry pickers or climb poles for make-ready work that prepares the poles to accept new fiber. There might even be some poles replaced, but most people wouldn’t associate that with fiber construction. The construction process of hanging the fiber can be hard to distinguish from the process of adding wires for other utilities. There are generally some cherry pickers and a vehicle that holds a reel of fiber cable. The aerial construction process can move quickly after the poles have been properly prepared, and many folks won’t even realize that fiber has been added along their street.

Trenching. Trenching fiber is the best-named construction method because it exactly describes the construction process. With trenching, a construction crew will open a ditch with a backhoe and lay conduit or fiber into the open hole. Trenching is usually chosen in two circumstances. First, it is often the least expensive way to bury conduit along stretches of a road that don’t have impediments like driveways. When a contractor builds fiber in a whole city, trenching might be used along streets that have not yet been developed and that don’t yet have sidewalks. Trenching is usually the preferred construction method when putting fiber into a new subdivision – the ditches are excavated, and conduit is placed before the streets are paved.

Plowing. Cable plowing is a construction method that uses a heavy vehicle called a cable plow to directly bury fiber into the ground as the plow drives along the right-of-way. Fiber plowing is done almost exclusively when burying fiber cable along a route where the fiber will be placed in unpaved rights-of-way, such as along a country road. The right-of-way must be open and not wooded to allow access to the cable plow.

A cable plow is an unmistakable piece of equipment. It’s a bulldozer-sized vehicle that holds a large spool of fiber, and folks will inevitably wonder what the contraption moving along a country road is. But the plowing work can proceed quickly, and the more noticeable crews are the ones boring underneath driveways and intersections along the plowing route.

Boring. Also called horizontal boring, trenchless digging, or directional drilling, this is a construction method that uses drills to push or pull rods horizontally underground to create a temporary hole large enough to accommodate a conduit. This is the technique used to place fiber under paved streets, driveways, and sidewalks.

Boring rigs come in a variety of sizes based on the length of the expected drill path. Small boring rigs might be mounted on the back of a truck. Large boring rigs are standalone heavy equipment that are often mounted on treads (like a tank) instead of wheels to accommodate a wide variety of terrain. It’s fairly easy to identify a fiber boring operation because there will be vehicles of all sorts around the area and usually large reels of brightly colored conduit nearby. The chances are that if you see fiber construction in a town, it is using boring.

Microtrenching. This construction process is unmistakable. A heavy piece of equipment that contains a giant saw cuts a narrow trench in the street. The saw is usually followed by trucks that haul away the removed street materials. The cutting process is loud and draws everybody’s attention. Microtrenching can be finished in a day in ideal circumstances where the hole is cut, side connections are made with a high-pressure water drill to get fiber under the streets and sidewalks, and the narrow trench is refilled and capped.

Is Broadband Regulation Dead?

I ask this question after Gigi Sohn recently withdrew her name from consideration as an FCC Commissioner. It’s been obvious for a long time that the Senate was never going to approve her nomination. Some Senators blamed their reluctance to approve her on Sohn’s history as an advocate for the public over big corporations.

But the objections to Sohn were all the kinds of smokescreens that politicians use to avoid admitting the real reason they opposed the nomination. Gigi Sohn is not going to be the next Commissioner because she is in favor of regulating broadband and the public airwaves. The big ISPs and the large broadcasting companies (some of which are both) have been lobbying hard against the Sohn nomination since it was first announced. These giant corporations don’t want a third Democratic Commissioner who is pro-regulation.

In the past, the party that held the White House was able to nominate regulators to the FCC and other regulatory agencies that reflected the philosophies of their political party. That’s been a given in Washington DC, and agencies like the FCC have bounced back and forth between different concepts of what it means to regulate according to which party controlled the White House.

But I think the failure to approve Sohn breaks the historical convention that lets the political party in power decide who to add as regulators. I predict this will not end with this failed nomination. Unless the Senate gets a larger majority for one of the parties, I have a hard time seeing any Senate that is going to approve a fifth FCC Commissioner. If Republicans win the next presidential race, their nominee for the fifth Commissioner slot will also likely have no chance of getting approved.

The primary reason for this is that votes for an FCC Commissioner are no longer purely along party lines. The large ISPs and broadcasters make huge contributions to Senators for the very purpose of influencing this kind of issue. That’s not to say that there will never be a fifth Commissioner, but rejecting this nomination means it’s going to be a lot harder in the future to seat FCC Commissioners who embrace the position of the political party in power, as Ajit Pai did and as Gigi Sohn likely would have done.

I think we’re now seeing the textbook example of regulatory capture. That’s an economic principle that describes a situation where regulatory agencies are dominated by the industries they are supposed to be regulating. Economic theory says that it’s necessary to regulate any industry where a handful of large players control the market. Good regulation is not opposed to the large corporations being regulated but should strike a balance between what’s good for the industry and what’s good for the public. In a perfectly regulated industry, both the industry and the public should be miffed at regulators for not fully supporting their issues.

The concept of regulatory capture was proposed in the 1970s by George Stigler, a Nobel prize-winning economist. He outlined the characteristics of regulatory capture, and they describe the broadband industry to a tee.

  • Regulated industries devote a large budget to influence regulators at the federal, state, and local levels. It’s typical that citizens don’t have the wherewithal to effectively lobby the public’s side of issues.
  • Regulators tend to come from the regulated industry, and they tend to take advantage of the revolving door to return to industry at the end of their stint as a regulator.
  • In extreme cases of regulatory capture, incumbents are freed from onerous regulations while new market entrants must jump through hoops.

The FCC is a prime example of a captured regulator. The FCC under Ajit Pai went so far as to deregulate broadband and wash the agency’s hands of broadband as much as possible by theoretically passing the little remaining regulation to the FTC. It’s hard to imagine an FCC more under the sway of the broadband industry than the last one.

There is no real fix for regulatory capture other than a loud public outcry to bring back strong regulation. But that’s never going to happen when regulatory capture is so complete that it’s impossible to even seat a fifth Commissioner.

Higher Prices for Rural Broadband

Innovative Systems of Mitchell, South Dakota, commissioned a survey of broadband and bundled rates paid by rural residents. This is the eighth year of the survey. The 2022 survey focused on zip codes that are completely rural in order to find out about rural rates. The results come from surveys administered to 841 rural residents.

The study showed that the average rate paid for rural broadband increased from $68 per month in 2021 to $71 in 2022. The average bill for customers that bundle broadband with video increased from $114 in 2021 to $121 in 2022.

I was not surprised to see rural rates climbing because rates seem to be moving upward everywhere. For example, in urban markets, all of the major cable companies have had rate increases. A lot of other ISPs have followed suit in an attempt to keep ahead of inflation.

But nationwide averages likely don’t tell the whole story in rural markets. It has been my experience, having worked in dozens of counties in the last few years, that rural rates generally have a lot more geographic variance than urban rates. In any given rural county, there is generally only a tiny handful of ISPs with significant market penetration, and the rates of those specific ISPs might be significantly different from those in even a neighboring county.

The rural broadband landscape is incredibly diverse. There are counties where the largest ISP is still the incumbent telco with DSL. Many counties have WISPs providing broadband – but I see counties with no WISPs and others with a half dozen. There are households using high-orbit satellites like Viasat as well as low-orbit Starlink. There are often a number of households using cellular hotspots – and now I’m starting to see some counties where the newer and faster FWA broadband is making inroads. Some lucky counties now have rural fiber – and some have a lot of it.

All of this means drastically varying average broadband costs from county to county. For example, I would expect a county where most subscribers use satellite or hotspots to have the highest average rates – because those two options can be incredibly expensive for a home that uses even a modest amount of broadband in a month. Viasat, HughesNet, and the various hotspot products all have meager data caps, and I’ve talked to homes that regularly had broadband bills of $500 or more per month during the pandemic, with minimum satellite bills easily over $100. Starlink recently raised its rate to $120, with some customers having to pay $130.

At the other end of the price scale are some cooperatives that offer low rates on fiber. There are a few coops that sell a gigabit connection for $50 or less.

WISP prices are all over the board, with some WISPs charging $60 rates while others are closer to $100.

DSL rates from the large incumbent telcos held steady for many years, although that has recently changed. For example, CenturyLink increased DSL rates in 2022 after many years at the same rates.

Probably the most affordable rural rates come from FWA wireless offered by T-Mobile, Verizon, and some smaller cellular carriers. But I still haven’t encountered a rural county where this is a widely available product. Many rural counties still have not seen the upgrades needed for FWA. But even in counties where FWA has been deployed, the product is only available to folks who live fairly close to cell towers – and much of rural America has a real dearth of cell towers.

I’ve studied counties in the last few years where average prices were in the $60s and others where the average was closer to $90. My surveys have produced results similar to this one, with overall average rates a little closer to $75 per month – but this survey has a much wider sample of communities.
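The county-by-county averaging described above is simple to sketch in code. The snippet below groups hypothetical survey responses by county and computes each county’s average rate – the county names, technologies, and dollar figures are invented for illustration and are not taken from the actual survey.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: (county, technology, monthly rate in dollars).
# All names and figures are invented for illustration only.
responses = [
    ("Alpha", "satellite", 130), ("Alpha", "hotspot", 110),
    ("Alpha", "satellite", 150), ("Alpha", "dsl", 55),
    ("Beta", "fiber", 50), ("Beta", "fiber", 60),
    ("Beta", "wisp", 65), ("Beta", "fwa", 50),
]

# Group the reported rates by county, then average each group.
by_county = defaultdict(list)
for county, _tech, rate in responses:
    by_county[county].append(rate)

for county, rates in sorted(by_county.items()):
    print(f"{county}: average ${mean(rates):.2f} across {len(rates)} responses")
```

Even this toy data shows the point: a county dominated by satellite and hotspot users averages roughly double the rate of a county with fiber and FWA, which is why a single nationwide average hides so much.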

Upgrading to faster networks with grant funding will not necessarily bring better rates. I’ve already seen a few RDOF winners with gigabit rates in the $60 range, while a few others have starting rates close to $100. I think rural prices are always going to be very dependent on the local ISPs that set up shop in a given county.

BEAD Grants – File Early or Wait?

Several states have already announced that there will be multiple rounds of BEAD grant applications. This makes a lot of sense for states that will be receiving a significant amount of BEAD funding. It’s a daunting prospect to try to meet all of the goals established by the NTIA in a single round of grants. Perhaps the biggest challenge will be making sure that as many unserved and underserved homes as possible find a broadband solution.

One issue of concern for State Broadband Offices has to be what to do if nobody asks for grant funding in some parts of a state. This is not hard for me to envision. I’ve been working in a lot of parts of the country where a 75% grant might not be sufficient – particularly when considering the extra costs that BEAD adds to building a broadband solution. For example, I’ve looked at a few mountainous and remote communities where the grants will need to be nearly 100% to get an ISP to bring a fiber network – and these are communities that don’t look to be reasonably served by a wireless solution. I think ISPs and State Broadband Offices will eventually need to negotiate to bring broadband to the highest-cost places.

There are also a lot of small pockets of homes everywhere that do not neatly fit into any grant application. One of the reasons for many of these pockets is the FCC’s RDOF subsidies, which created service areas often described as Swiss cheese. But there are also small pockets of customers everywhere for more natural reasons, such as being located far from other customers. I believe States are going to have to get very creative if they really want to get all of these tiny pockets served because ISPs are not likely to go through the complicated BEAD grant process to serve tiny, isolated areas.

This is further complicated by the legislative and NTIA rules that say that States must bring broadband to unserved locations before funding other places. Nobody yet understands what that will look like in practice, but States seem to have a mandate to make sure they find a solution for the most challenging locations before spending all of the BEAD grant funding elsewhere. That alone sounds like a good reason to expect multiple rounds of grants.

Communities have a different issue to consider. Many communities have a strong preference for the ISP(s) they want to serve them. Some favor local ISPs they’ve known and trusted for many years. Others are excited to see electric coops considering becoming the broadband provider. Some communities have a strong preference for fiber. Some local governments have already heard from residents that they do not want the incumbent telco that neglected them for decades to get more funding – they want somebody different.

It’s my opinion that communities with a strong preference for specific ISPs need to work with their chosen ISPs to be part of the first round of BEAD grant filings. Otherwise, they take a big chance that somebody they don’t want will file early and win. I’ve always thought it likely that State Grant Offices will give extra consideration to ISPs that have the strong support of local officials. Many jurisdictions are making small local grants to demonstrate their support for a specific set of ISPs. But having a strong preference for an ISP partner won’t mean much if some other ISP files grants first.

There is a lot of speculation about the degree to which the big telcos and firms that are backed by venture capital money will be chasing the $42.5 billion in grants. It’s probably fair to assume these big companies will file every grant they are interested in during the first grant window of opportunity.

This means that it’s already time to talk to ISPs. I’m working with a number of counties that are already reaching out to local ISPs to understand their intentions. County governments have a strong desire to know that somebody plans to serve every unserved and underserved location in the county. Their biggest fear is that the big grants will come and go, and some of their folks will still have no broadband solution.

ISPs that really want to serve specific areas need to be ready by the first round of grant filings – and communities should be pushing them to do so. It’s likely that the earliest rounds of state grants will be oversubscribed and many grants will not be made until later rounds – but if somebody beats an ISP to an area you want to serve, the opportunity might be gone.