Filling a Regulatory Void

Earlier this year, the Ninth Circuit Court of Appeals upheld the net neutrality regulations enacted by California. The appeal was filed on behalf of the big ISPs by ACA Connects, CTIA, NCTA, and USTelecom.

The case stems from the California net neutrality legislation passed in 2018. The California law was a direct reaction to the Ajit Pai FCC, which not only killed federal net neutrality rules but also wiped out most federal regulation of broadband. The California legislation made it clear that the State doesn’t want ISPs to have unfettered license to behave badly.

The California net neutrality rules are straightforward. The law applies to both landline and mobile broadband. Specifically, the California net neutrality law:

  • Prohibits ISPs from blocking lawful content.
  • Prohibits ISPs from impairing or degrading lawful Internet traffic except as is necessary for reasonable network management.
  • Prohibits ISPs from requiring compensation, monetary or otherwise, from edge providers (companies like Netflix or Google) for delivering Internet traffic or content.
  • Prohibits paid prioritization.
  • Prohibits zero-rating.
  • Prohibits interference with an end user’s ability to select content, applications, services, or devices.
  • Requires the full and accurate public disclosure of network management practices, performance, and clearly worded terms of service.
  • Prohibits ISPs from offering any product that evades any of the above prohibitions.

This is an interesting step in the battle to regulate ISPs. The big ISPs put a huge amount of money and effort into getting the FCC under Ajit Pai to kill federal broadband regulation. There has been a long-standing tradition in the telecom world that concedes the FCC’s power to make federal rules but leaves states free to regulate issues not mandated by the FCC. There have been some tussles over the years between states and the FCC, but courts have consistently sided with the FCC’s authority to make national rules. When the FCC walked away from most broadband regulation, it created a regulatory void that tradition implies states are allowed to fill.

Losing this court case creates a huge dilemma for big ISPs. California is such a large part of the economy that it would be hard for ISPs to follow this law in California and not follow it elsewhere. It also seems likely that other states will now pass similar laws over the next few years, and that will create the worst possible nightmare for big ISPs – different regulations in different states.

I’ve always adhered to the belief that there is a regulatory pendulum. When regulations get too tough for a regulated industry, there is usually a big push to lighten the regulatory burden. But when the pendulum swings the other way and regulation gets too slack, there is inevitably a big push to put more restrictions on the industry being regulated. In this case, the ISPs and Ajit Pai went too far by eliminating most meaningful federal broadband regulation. There is nothing surprising about California and other states reacting to the lack of federal regulation.

With this court decision, there is nothing to stop a dozen states from creating net neutrality rules or tackling the other regulations that got voided by the Ajit Pai FCC. It’s also not hard to predict that the big ISPs will now push to create a watered-down federal version of net neutrality as a way to override a plethora of state rules.

I said earlier that this is a dilemma for large ISPs because it is extremely rare for a small ISP to violate net neutrality principles, and not easy for one to do so. The California rules will require ISPs to create more plain-English terms of service, but otherwise, small ISPs in California are not likely to be bothered by any of these rules.

For the big ISPs, this is a harsh reminder that the regulatory pendulum always swings back. It’s not hard to envision celebration behind the scenes at the big ISPs when they convinced the FCC to give them everything on their wish list. But when regulations get out of balance, there is inevitably pushback in the other direction.

There is still one piece of unfinished business in this case: the court is still examining whether the California law impinges on interstate commerce. But the Ninth Circuit’s ruling made it clear that California is free to enforce its version of net neutrality within the state.

Net Neutrality Again?

There is an interesting recent discussion in Europe about net neutrality that has relevance to the U.S. broadband market. The European Commission, which oversees telecom and broadband, has started taking comments on a proposal to force content generators like Netflix to pay fees to ISPs for using the Internet. I’ve seen this same idea circulating here from time to time, and in fact, this was one of the issues that convinced the FCC to first implement net neutrality.

Netflix generates less than 10% of the broadband traffic in Europe, and European ISPs think that Netflix should pay a substantial fee for using their networks. Europe looks a lot like the U.S., and Netflix, Meta, Amazon, Google, Apple, and Microsoft generate most of the traffic there. Online video accounts for 65 percent of all traffic on the web. Netflix argues that the amount of video on the web will continue to climb and that any fees charged to video providers will eventually be applied to a wider range of content providers.

It’s an interesting topic that can be considered from different perspectives. First, companies like Netflix already spend a lot of money to use the network today. Just like in the U.S., Netflix has built or purchased transport to allow local peering. Netflix claims to have deployed 18,000 local servers in 175 countries to move its video signals closer to ISP networks. This relieves a lot of traffic on the Internet core and also improves the quality of Netflix content. The same is true for other content providers, and in the U.S., there are a lot of local peering points that have been created by Google, Meta, and others.

Netflix makes the point that the big ISPs in Europe are already profitable and would simply pocket any new revenue stream. Netflix is highly skeptical that any benefit to ISPs from charging content providers would be passed on to broadband customers through lower prices.

When net neutrality was discussed in the U.S., there was a good argument made by content providers that subscribers are already paying for end-to-end use of the Internet in the monthly fees paid to ISPs. Charging the content providers for using the Internet would amount to billing twice for the same traffic. Since the original net neutrality discussion here, U.S. broadband prices charged by cable companies have increased significantly, making it even more true that customers are supporting the Internet.

Another way to think about the issue is that video is the service that drives a lot of households to buy broadband. Without Netflix and the other online video content providers, there would not be nearly as many broadband users, and ISPs would not have such a large market share. There is a truism in the industry that says you shouldn’t build a broadband network solely to provide entertainment to customers, but there is no denying that a lot of homes wouldn’t buy broadband if it weren’t for video and social media. Not everybody works from home or has students who need broadband for schoolwork.

There are several reasons why I am highlighting this European issue. Topics that become issues in Europe invariably are raised as issues here, and vice versa. If American ISPs see that European ISPs have been able to extract payments from Netflix, our ISPs will immediately start making the same demands here.

The other interesting aspect of this particular argument is that it’s something that we already solved once in the past when the FCC passed net neutrality rules. But the Ajit Pai FCC tossed out those rules, so it was inevitable that net neutrality topics would eventually come to life here again.

The net neutrality issue is one of the most interesting topics from a regulatory perspective. Even after Ajit Pai tossed out the net neutrality rules, American ISPs didn’t change their behavior. There are two possible reasons for this. First, I think ISPs have tried to keep a cap on behavior that would induce regulators to put net neutrality back in place; it seems that the mere threat of reintroducing net neutrality has kept ISPs in check. That said, I find it likely that ISPs are now feeling braver after having squashed the proposed fifth FCC Commissioner.

The other reason is that California put its own version of net neutrality rules in place, which has slowly made its way through the courts and is now in effect. ISPs might not be willing to take on California, because to do so might invite many other states to pass different versions of the same rules. As much as ISPs hate the idea of federal regulation, their biggest fear is a hodgepodge of different regulations in different states.

More Mapping Drama

As if the federal mapping process needed more drama, Senators Jacky Rosen (D-Nevada) and John Thune (R-South Dakota) have introduced bill S.1162, which would “ensure that broadband maps are accurate before funds are allocated under the Broadband Equity, Access, and Deployment Program based on those maps”.

If this law is enacted, the distribution of most of the BEAD grant funds to States would be delayed by at least six months, probably longer. The NTIA has already said that it intends to announce the allocation of the $42.5 billion in grants to the states on June 30. The funds are supposed to be allocated using the best count of unserved and underserved locations in each state on that date. Unserved locations are those that can’t buy broadband of at least 25/3 Mbps. Underserved locations are those unable to buy broadband with speeds of at least 100/20 Mbps.
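
As an illustration, here is a minimal sketch of that classification logic in Python. The sample locations and speeds are invented; only the 25/3 and 100/20 Mbps thresholds come from the rules described above.

    # Classify locations using the BEAD speed thresholds (Mbps).
    # Unserved: can't buy at least 25/3. Underserved: can't buy at least 100/20.
    def classify(download_mbps: float, upload_mbps: float) -> str:
        if download_mbps < 25 or upload_mbps < 3:
            return "unserved"
        if download_mbps < 100 or upload_mbps < 20:
            return "underserved"
        return "served"

    # Hypothetical locations: (description, best available download/upload).
    for name, down, up in [("farm on a county road", 10, 1),
                           ("edge-of-town subdivision", 50, 10),
                           ("downtown address", 940, 35)]:
        print(f"{name}: {classify(down, up)}")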

To add to the story, FCC Commissioner Jessica Rosenworcel recently announced that the FCC has largely completed the broadband map updates. That announcement surprised the folks in the industry who have been working with the map data, since everybody I talk to is still seeing a lot of inaccuracies in the maps.

To the FCC’s credit, its vendor CostQuest has been processing thousands of individual challenges to the maps daily and has addressed 600 bulk challenges that have been filed by States, counties, and other local government entities. In making the announcement, Rosenworcel said that the new map has added over one million new locations to the broadband map – homes and businesses that were missed in the creation of the first version of the map last fall.

But the FCC map has two important components that must be correct for the overall maps to be correct. The first is the mapping fabric that is supposed to identify every location in the country that is a potential broadband customer. I view this as a nearly impossible task. The U.S. Census Bureau spends billions every ten years to identify the addresses of residents and businesses in the country. CostQuest tried to duplicate that effort on a much smaller budget and under the time pressure of the maps being used to allocate these grants. It’s challenging to count potential broadband customers. I wrote a blog last year that outlined a few of the dozens of issues that must be addressed to get an accurate map. It’s hard to think that CostQuest somehow figured out all of these complicated questions in the last six months.

Even if the fabric is much improved, the more important issue is that the accuracy of the broadband map relies on two things reported by ISPs: the coverage area where an ISP should be able to connect a new customer within ten days of a request, and the broadband speeds available at each location.

ISPs are pretty much free to claim whatever they want. While there has been a lot of work done to challenge the fabric and the location of possible customers, it’s a lot harder to challenge the coverage claims of specific ISPs. A true challenge would require many millions of individual challenges about the broadband that is available at each home.

Just consider my own home. The national broadband map says there are ten ISPs available at my address. Several I’ve never heard of, and I’m willing to bet that at least a few of them can’t serve me – but since I’m already buying broadband from an ISP, I can’t think of any reason that would lead me to challenge the claims of the ISPs I’m not using. The FCC thinks that the challenge process will somehow fix the coverage issue – I can’t imagine that more than a tiny fraction of folks are ever going to care enough to go through the FCC map challenge process – or even know that the broadband map exists.

The FCC mapping has also not yet come to grips with broadband coverage claimed by wireless ISPs. It’s not hard, looking through the FCC data, to find numerous WISPs that claim large coverage areas. In real life, the availability of a wireless connection is complicated. The FCC is in the process of requiring wireless carriers to report using a ‘heat map’ that shows the strength of the wireless signal at various distances from each individual radio. But even these heat maps won’t tell the full story. WISPs are sometimes able to find ways to serve customers that are not within easy reach of a tower. But just like with cellphone coverage, there are usually plenty of dead zones around a radio that can’t be reached but that will still be claimed on a heat map – heat maps are nothing more than a rough approximation of actual coverage. It’s hard to imagine that wireless coverage areas will ever be fully accurate.
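
To see why a heat map can only ever approximate coverage, consider the kind of log-distance path-loss formula that radio planning tools are typically built on. This sketch is purely illustrative; the transmit power, reference loss, and propagation exponent are assumptions, and real planning tools layer terrain and clutter models on top of something like this.

    import math

    def modeled_signal_dbm(tx_power_dbm: float, distance_m: float,
                           ref_loss_db: float = 40.0, exponent: float = 3.0) -> float:
        """Log-distance path loss: signal falls off as 10*n*log10(distance).
        The exponent n is an environment guess (2 = free space, 3-4 = clutter)."""
        path_loss_db = ref_loss_db + 10 * exponent * math.log10(max(distance_m, 1.0))
        return tx_power_dbm - path_loss_db

    # The model predicts a smooth falloff with distance...
    for d in (500, 1600, 3200, 8000):
        print(f"{d:>5} m: {modeled_signal_dbm(30, d):6.1f} dBm (modeled)")
    # ...but a home behind a hill or a stand of trees can be a dead zone at any
    # of these distances, and the heat map will still color it as covered.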

DSL coverage over telephone copper is equally impossible to map correctly, and there are still places where DSL is claimed but which can’t be served.

Broadband speeds are even harder to challenge. Under the FCC mapping rules, ISPs are allowed to claim marketing speeds. If an ISP markets broadband as capable of 100/20 Mbps, it can claim that speed on the broadband map. It doesn’t matter if the actual broadband delivered is only a fraction of that speed. There are so many factors that affect broadband speeds that the maps will never accurately depict the speeds folks can really buy. It’s amazingly disingenuous for the FCC to say the maps are accurate. The best we could ever hope for is that the maps will get better if, and only if, ISPs scrupulously follow the reporting rules – but nobody thinks that is going to happen.
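
If anyone wanted to test marketing claims against reality, the comparison itself is trivial; the hard part is gathering trustworthy measurements at scale. Here is a hypothetical sketch – the addresses, speeds, and the 80% cutoff are all my assumptions, not anything in the FCC rules.

    # Flag locations where measured speeds fall far below the claimed marketing
    # speed. In practice this would need many repeated speed tests per location
    # to be meaningful.
    claims = [
        ("1 Elm St", 100, 92),  # roughly delivers the claim
        ("2 Elm St", 100, 41),  # well below the claim
        ("3 Elm St", 100, 12),  # nowhere near the claim
    ]

    CUTOFF = 0.8  # an assumed materiality threshold

    for address, claimed_mbps, measured_mbps in claims:
        if measured_mbps < claimed_mbps * CUTOFF:
            print(f"{address}: claimed {claimed_mbps} Mbps, "
                  f"measured {measured_mbps} Mbps - worth challenging")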

I understand the frustration of the Senators who are suggesting this legislation. But I also think that we’ll never get an accurate set of maps. Don’t forget that Congress created the requirement to use the maps to allocate the BEAD grant dollars. Grant funding could have been done in other ways that didn’t rely on the maps. I don’t think it’s going to make much difference if we delay six months, a year, or four years – the maps are going to remain consistently inconsistent.

Is Broadband Regulation Dead?

I ask this question after Gigi Sohn recently withdrew her name from consideration as an FCC Commissioner. It’s been obvious for a long time that the Senate was never going to approve her nomination. Some Senators tried to pin their reluctance on Sohn’s history as an advocate for the public over big corporations.

But the objections to Sohn were the kinds of smokescreens that politicians use to avoid admitting the real reason they opposed the nomination. Gigi Sohn is not going to be the next Commissioner because she is in favor of regulating broadband and the public airwaves. The big ISPs and the large broadcasting companies (some companies are both) have been lobbying hard against the Sohn nomination since it was first announced. These giant corporations don’t want a third Democratic Commissioner who is pro-regulation.

In the past, the party that held the White House was able to nominate regulators to the FCC and other regulatory agencies that reflected the philosophies of their political party. That’s been a given in Washington DC, and agencies like the FCC have bounced back and forth between different concepts of what it means to regulate according to which party controlled the White House.

But I think the failure to approve Sohn breaks the historical convention that lets the political party in power decide who to add as regulators. I predict this will not end with this failed nomination. Unless the Senate gets a larger majority for one of the parties, I have a hard time seeing any Senate that is going to approve a fifth FCC Commissioner. If Republicans win the next presidential race, their nominee for the fifth Commissioner slot will also likely have no chance of getting approved.

The primary reason for this is that votes for an FCC Commissioner are no longer purely along party lines. The large ISPs and broadcasters make huge contributions to Senators for the very purpose of influencing this kind of issue. That’s not to say that there will never be a fifth Commissioner, but rejecting this nomination means it’s going to be a lot harder in the future to seat FCC Commissioners who embrace the position of the political party in power, as Ajit Pai did and as Gigi Sohn likely would have done.

I think we’re now seeing the textbook example of regulatory capture. That’s an economic principle that describes a situation where regulatory agencies are dominated by the industries they are supposed to be regulating. Economic theory says that it’s necessary to regulate any industry where a handful of large players control the market. Good regulation is not opposed to the large corporations being regulated but should strike a balance between what’s good for the industry and what’s good for the public. In a perfectly regulated industry, both the industry and the public should be miffed at regulators for not fully supporting their issues.

The concept of regulatory capture was proposed in the 1970s by George Stigler, a Nobel prize-winning economist. He outlined characteristics of regulatory capture that describe the broadband industry to a tee.

  • Regulated industries devote a large budget to influence regulators at the federal, state, and local levels. It’s typical that citizens don’t have the wherewithal to effectively lobby the public’s side of issues.
  • Regulators tend to come from the regulated industry, and they tend to take advantage of the revolving door to return to industry at the end of their stint as a regulator.
  • In extreme cases of regulatory capture, the incumbents are relieved of onerous regulations while new market entrants must jump through hoops.

The FCC is a textbook example of a captured regulator. Under Ajit Pai, the FCC went so far as to deregulate broadband and wash its hands of the issue as much as possible by theoretically passing the little remaining regulation to the FTC. It’s hard to imagine an FCC more under the sway of the broadband industry than the last one.

There is no real fix for regulatory capture other than a loud public outcry to bring back strong regulation. But that’s never going to happen when regulatory capture is so complete that it’s impossible to even seat a fifth Commissioner.

Let’s Stop Talking About Technology Neutral

A few weeks ago, I wrote a blog about the misuse of the term overbuilding. Big ISPs use the term to give politicians a phrase they can use to shield the big companies from competition. The argument is always framed as opposition to using federal funds to overbuild where an ISP is already providing fast broadband. What the big ISPs really mean is that they don’t want competition anywhere, even where they still offer outdated technologies or have neglected their networks.

Today I want to take on the phrase ‘technology neutral’. This phrase is being used to justify building technologies that are clearly not as good as fiber. The argument has been used a lot in recent years to say that grants should be technology neutral so as not to favor only fiber. The phrase was used a lot to justify allowing Starlink into the RDOF reverse auction. The phrase has been used a lot to justify allowing fixed wireless technology to win grants, and lately, it’s being used more specifically to allow fixed wireless using unlicensed spectrum into the BEAD grants.

The argument justifies allowing technologies like satellite or fixed wireless using unlicensed spectrum to get grants since the technologies are ‘good enough’ when compared to the requirements of the grant rules.

I have two arguments to counter that justification. The only reason the technology neutral argument can be raised is that politicians set the speed requirements for grants at ridiculously low levels. Consider all of the current grants that set the speed requirement for technology at 100/20 Mbps. The 100 Mbps speed requirement is an example of what I’ve recently called underbuilding – it allows for building a technology that is already too slow today. At least 80% of folks in the country today can buy broadband from a cable company or fiber company. Almost all of the cable companies offer download speeds as fast as a gigabit. Even in older cable systems, the maximum speeds are faster than 100 Mbps. Setting a grant speed requirement of only 100 Mbps download is saying to rural folks that they don’t deserve broadband as good as what is available to the large majority of people in the country.

The upload speed requirement of 20 Mbps was a total political sellout. It was set to appease the cable companies, many of which struggle to beat that speed. Interestingly, the big cable companies all recognize that their biggest market weakness is slow upload speeds, and most of them are working on plans to implement a mid-split upgrade or an early version of DOCSIS 4.0 to significantly improve upload speeds. Within just a few years, the 20 Mbps upload speed requirement is going to feel like ancient history.

The BEAD requirement of only needing to provide 20 Mbps upload is ironic for two reasons. First, in cities, the cable companies will have much faster upload speeds implemented by the time that anybody builds a BEAD network. Second, the cable companies that are pursuing grants are almost universally using fiber to satisfy those grants. Cable companies are rarely building coaxial copper plant for new construction. This means the 20 Mbps speed was set to protect cable companies against overbuilding – not set as a technology neutral speed that is forward looking.

The second argument against the technology neutral argument is that some technologies are clearly not good enough to justify receiving grant dollars. Consider Starlink satellite broadband. It’s a godsend to folks who have no alternatives, and many people rave about how it has solved their broadband problems. But the overall speeds are far slower than what was promised before the technology was launched. I’ve seen a huge number of speed tests for Starlink that don’t come close to the 100/20 Mbps speed required by the BEAD grants.

The same can be said for FWA wireless using cellular spectrum. It’s pretty decent broadband for folks who live within a mile or two of a tower, and I’ve talked to customers who are seeing speeds significantly in excess of 100/20 Mbps. But customers just a mile further from a tower tell a different story, with download speeds far under 100 Mbps. A technology with such a small coverage area does not meet the technology neutral test unless a cellular company promises to pepper an area with new cell towers.

Finally, a point that always gets pushback from WISPs: fixed wireless technology using unlicensed spectrum has plainly not been adequate in most places. Interference from the many users of unlicensed spectrum means broadband speeds vary depending on whatever is happening with the spectrum at a given moment. Interference also means higher latency and much higher packet loss than landline technologies.

I’ve argued until I am blue in the face that grant speed requirements should be set for the speeds we expect a decade from now and not for the bare minimum that makes sense today. It’s ludicrous to award grant funding to a technology that barely meets the 100/20 Mbps grant requirement when that network probably won’t be built until 2025. The real test for the right technology for grant funding is what the average urban customer will be able to buy in 2032. It’s hard to think that speed won’t be something like 2 Gbps/200 Mbps. If that’s what will be available to a large majority of households in a decade, it ought to be the technology neutral definition of speed to qualify for grants.
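
The arithmetic behind that 2032 guess is simple compounding. Here is a minimal sketch, assuming broadband demand keeps growing at sustained annual rates; the specific growth rates are assumptions for illustration, not a forecast.

    # Project today's 100/20 Mbps requirement forward ten years under
    # different assumed compound annual growth rates for broadband demand.
    def project(mbps_today: float, annual_growth: float, years: int = 10) -> float:
        return mbps_today * (1 + annual_growth) ** years

    for rate in (0.20, 0.30, 0.35):
        down = project(100, rate)
        up = project(20, rate)
        print(f"{rate:.0%}/year: 100/20 Mbps becomes {down:,.0f}/{up:,.0f} Mbps")
    # At sustained growth of 30-35% per year, today's 100/20 Mbps turns into
    # roughly 1.4-2 Gbps down, which is why a 2032-based test matters.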

BEAD Grants for Small Pockets of Customers

One of the most interesting aspects of the BEAD grants is that the funding is intended to make sure that everybody gets broadband. One section of the grant rules talks about how the funding can be used to serve areas as small as a single home. Following are two quotes from the BEAD rules:

Project—The term “project” means an undertaking by a subgrantee to construct and deploy infrastructure for the provision of broadband service. A “project” may constitute a single unserved or underserved broadband-serviceable location, or a grouping of broadband-serviceable locations in which not less than 80 percent of broadband-serviceable locations served by the project are unserved locations or underserved locations.

Unserved Service Project—The term “Unserved Service Project” means a project in which not less than 80 percent of broadband-serviceable locations served by the project are unserved locations. An “Unserved Service Project” may be as small as a single unserved broadband-serviceable location.

This is something that is badly needed because in every county I’ve worked in, there are small pockets of folks that have been left out of other broadband expansion projects. To give an example, I was working with a county where there is a small pocket of about fifteen homes that are between the areas funded by two state grants. The homes are along a State highway, which means higher construction costs. The earlier state grant applicants ignored the area because of the high costs.

I’m curious about how small areas like this one can fit into the complicated BEAD grant rules. I’m sure the two different ISPs that decided not to build these areas would do so if they got enough funding – which should be available from BEAD. But I can’t picture any ISP going through the massive hassle of plowing through the BEAD application and the myriad of rules to get the money to serve fifteen homes. I already know a lot of small ISPs that are thinking about skipping the BEAD grants entirely because of the complexity.

I’ll be interested to see how the State Broadband offices tackle this issue when they publish their draft grant rules. I would not expect any ISP to ask to serve small pockets of customers if they have to jump through the full gamut of the BEAD hoops. Will State Broadband offices come up with a simpler mechanism for these stray pockets of homes?

We’ve seen simpler mechanisms used for small pockets of homes in some state grants. For example, several states have used the concept of loop extension grants to fund homes that are close to an existing broadband network. These grants fund drops and customer electronics only and not the infrastructure wiring along the streets. The loop extension grants can be requested for a single home or groups of homes in a neighborhood.

Will a State be allowed to deviate from the NTIA grant rules to reach the many tiny clusters that will otherwise not get broadband? A lot of the complicated rules for BEAD were dictated by Congressional legislation, and it might not be possible to hand out money to anybody that doesn’t meet all of those federal requirements. If an ISP needs a letter of credit, an environmental study, and to jump through many other onerous hoops, I can’t picture any ISP that will be willing to tackle small pockets of customers. Unfortunately, the language above that classifies building to a single home as a project probably means that all of the rules associated with the BEAD grants will apply.

Digital Discrimination

The FCC recently opened a docket, at the prompting of federal legislation, that asks for examples of digital discrimination. The docket asks folks to share stories about how they have had a hard time obtaining or keeping broadband, specifically due to issues related to zip code, income level, ethnicity, race, religion, or national origin.

The big cable companies and telcos are all going to swear they don’t discriminate against anybody for any reason, and every argument they make will be pure bosh. Big corporations, in general, favor more affluent neighborhoods over poor ones. Neighborhoods that don’t have the best broadband networks are likely going to be the same neighborhoods that don’t have grocery stores, gas stations, retail stores, restaurants, banks, hotels, and a wide variety of other kinds of infrastructure investment from big corporations. The big cable companies and telcos are profit-driven and focused on stock prices, and they make many decisions based on the expected return to the bottom line – just like other large corporations.

There is clearly discrimination by ISPs based on income level. It’s going to be a lot harder to prove discrimination by ethnicity, race, religion, or national origin, although it’s likely that some stories of this will surface in this docket. But discrimination based on income is everywhere we look. There are two primary types of broadband discrimination related to income – infrastructure discrimination and price discrimination.

Infrastructure discrimination for broadband has been happening for a long time. It doesn’t take a hard look to see that telecom networks in low-income neighborhoods are not as good as those in more affluent neighborhoods. Any telecom technician or engineer can point out a dozen differences in the quality of the infrastructure between neighborhoods.

The first conclusive evidence of this came years ago from a study that overlaid upgrades for AT&T DSL over income levels, block by block in Dallas. The study clearly showed that neighborhoods with higher incomes got the upgrades to faster DSL during the early 2000s. The differences were stark, with some neighborhoods stuck with first-generation DSL that delivered 1-2 Mbps broadband while more affluent neighborhoods had been upgraded to 20 Mbps DSL or faster.

It’s not hard to put ourselves into the mind of the local AT&T managers in Dallas who made these decisions. The local manager would have been given an annual DSL upgrade budget and would have decided where to spend it. Since there wasn’t enough budget to upgrade everywhere, the local manager would have made the upgrades in neighborhoods where faster cable company competition was taking the most DSL customers – likely the more affluent neighborhoods that could afford the more expensive cable broadband. There were probably fewer customers fleeing the more affordable DSL option in poor neighborhoods where the price was a bigger factor for consumers than broadband speeds.

These same kinds of economic decisions have been played out over and over, year after year by the big ISPs until affluent neighborhoods grew to have better broadband infrastructure than poorer neighborhoods. Consider a few of the many examples of this:

  • I’ve always noticed that there are more underground utilities in newer and more affluent neighborhoods than in older and poorer ones. This puts broadband wires safely underground and out of reach of storm damage – which over time makes a big difference in the quality of the broadband being delivered. Interestingly, the decision of where to require utilities to be underground is made by local governments, and to some degree, cities have contributed to the difference in infrastructure between affluent and low-income neighborhoods.
  • Like many people in the industry, when I go to a new place, I automatically look up at the condition of the poles. While every place is different, there is clearly a trend toward taller and less cluttered poles in more affluent parts of a city. This might be because competition brought more wires to a neighborhood, which meant more make-ready work done to upgrade poles. But I’ve spotted many cases where poles in older and poorer neighborhoods are the worst in a community.
  • It’s easy to find many places where the Dallas DSL story is being replayed with fiber deployment. ISPs of all sizes cherry-pick the neighborhoods that they perceive to have the best earnings potential when they bring fiber to a new market.

We are on the verge of having AI software that can analyze data in new ways. I believe that we’ll find that broadband discrimination against low-income neighborhoods runs a lot deeper than the way we’ve been thinking about it. My guess is that if we map all of the infrastructure related to broadband we’d see firm evidence of the infrastructure differences between poor and more affluent neighborhoods.

I am sure that if we could gather the facts related to the age of the wires, poles, and other infrastructure, we’d find the infrastructure in low-income neighborhoods is significantly older than in other neighborhoods. Upgrades to broadband networks are usually not done in a rip-and-replace fashion but are done by dozens of small repairs and upgrades over time. I also suspect that if you could plot all of the small upgrades done over time to improve networks, you’d find more of these small upgrades, such as replacing cable company power taps and amplifiers, to have been done in more affluent neighborhoods.
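
The analysis itself would be simple once the data exists. Here is a hypothetical sketch of the overlay I have in mind; every number and column name is invented, and the missing ingredient in real life is the infrastructure-age data, not the code.

    import pandas as pd

    # Invented per-block data: median household income and the average age
    # of the broadband plant (wires, poles, taps, amplifiers) serving it.
    blocks = pd.DataFrame({
        "block": ["A", "B", "C", "D", "E", "F"],
        "median_income": [32_000, 41_000, 55_000, 78_000, 95_000, 120_000],
        "avg_plant_age_years": [34, 30, 22, 14, 9, 6],
    })

    # A strongly negative correlation would be firm evidence that lower-income
    # blocks are being served by older infrastructure.
    print(blocks["median_income"].corr(blocks["avg_plant_age_years"]))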

We tend to think of broadband infrastructure as the network of wires that brings fast Internet to homes, but modern broadband has grown to be much more than that, and there is a lot of broadband infrastructure that is not aimed at home broadband. Broadband infrastructure has also come to mean small cell sites, smart grid infrastructure, and smart city infrastructure. I believe that if we could map everything related to these broadband investments we’d see more examples of discrimination.

Consider small cell sites. Cellular companies have been building fiber to install small cell sites to beef up cellular networks. I’ve never seen maps of small cell installations, but I would wager that if we mapped all of the new fiber and small cell sites we’d find a bias against low-income neighborhoods.

I hope one day to see an AI-generated map that overlays all of these various technologies against household incomes. My gut tells me that we’d find that low-income neighborhoods will come up short across the board. Low-income neighborhoods will have older wires and older poles. Low-income neighborhoods will have fewer small cell sites. Low-income neighborhoods won’t be the first to get upgraded smart grid technologies. Low-income neighborhoods won’t get the same share of smart city technologies, possibly due to the lack of other infrastructure.

This is the subtle discrimination that the FCC isn’t going to find in their docket because nobody has the proof. I could be wrong, and perhaps I’m just presupposing that low-income neighborhoods get less of every new technology. I hope some smart data guys can find the data to map these various technologies because my gut tells me that I’m right.

Price discrimination has been around for a long time, but I think there is evidence that it’s intensified in recent years. I first noticed price discrimination in the early price wars between the big cable companies and Verizon FiOS. This was the first widespread example of ISPs going head-to-head with decent broadband products where the big differentiator was the price.

I think the first time I heard the term ‘win-back program’ was related to cable companies working hard not to lose customers to Verizon. There are stories in the early days of heavy competition of Comcast keeping customers on the phone for a long time when a customer tried to disconnect service. The cable company would throw all sorts of price incentives to stop customers from leaving to go to Verizon. Over time, the win-back programs grew to be less aggressive, but they are still with us today in markets where cable companies face stiff competition.

I think price competition has gotten a lot more subtle, as witnessed by a recent study in Los Angeles that showed that Charter offers drastically different online prices for different neighborhoods. I’ve been expecting to see this kind of pricing for several years. This is a natural consequence of all of the work that ISPs have done to build profiles of people and neighborhoods. Consumers have always been leery about data gathered about them, and the Charter marketing practices by neighborhood are the natural endgame of having granular data about the residents of LA.

From a purely commercial viewpoint, what Charter is doing makes sense. Companies of all sorts use pricing to reward good existing customers and to lure new customers. Software companies give us a lower price for paying for a year upfront rather than paying monthly. Fast food restaurants, grocery stores, and a wide range of businesses give us rewards for being regular customers.

It’s going to take a whistleblower to disclose what Charter is really doing. But the chances are it has a sophisticated software system that gives a rating for individual customers and neighborhoods based on the likelihood of customers buying broadband or churning to go to somebody else. This software is designed to offer a deeper discount in neighborhoods where price has proven to be an effective technique to keep customers – without offering lower prices everywhere.

I would imagine the smart numbers guy who devised this software had no idea that it would result in blatant discrimination – it’s software that lets Charter maximize revenue by fine-tuning the price according to a computer prediction of what a given customer or neighborhood is willing to pay. There has been a lot of speculation about how ISPs and others would integrate the mounds of our personal data into their businesses, and it looks like it has resulted in finely-tuned price discrimination by city block.
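
As a thought experiment, the pricing engine this implies could be as simple as the sketch below. Every number here is invented, and nothing is based on knowledge of Charter’s actual systems; it just shows how a churn score would translate mechanically into block-by-block prices.

    # Toy model of neighborhood price targeting: a churn-propensity score
    # (0 = captive customers, 1 = very likely to defect) drives the discount.
    BASE_PRICE = 90.00   # assumed list price
    MAX_DISCOUNT = 0.40  # assumed ceiling on discounting

    def offered_price(churn_propensity: float) -> float:
        """Offer deeper discounts where customers are more likely to leave."""
        return round(BASE_PRICE * (1 - MAX_DISCOUNT * churn_propensity), 2)

    for neighborhood, score in [("no competitor", 0.1),
                                ("FWA competitor nearby", 0.5),
                                ("fiber overbuilder present", 0.9)]:
        print(f"{neighborhood}: ${offered_price(score):.2f}")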

Is There a Fix for Digital Discrimination?

The big news in the broadband industry is that we are in the process of spending billions of dollars to solve the ultimate case of economic discrimination – the gap between urban and rural broadband infrastructure. The big telcos completely walked away from rural areas as soon as they were deregulated and could do so. The big cable companies never made investments in rural areas due to the higher costs. The difference between urban and rural broadband networks is so stark that we’ve decided to cure decades of economic discrimination by throwing billions of dollars at closing the gap.

But nobody has been seriously looking at the more subtle manifestation of the same issue in cities. The FCC is only looking at digital discrimination because it was required by the Infrastructure Act. Does anybody expect that anything will come out of the stories of discrimination? ISPs are going to say that they don’t discriminate. If pinned down, they will say that what looks like discrimination is only the consequence of them making defensible economic decisions and that there was no intention to discriminate.

Most of the discrimination we see in broadband is due to the lack of regulation of ISPs. They are free to chase earnings as their top priority. ISPs have no regulatory mandate to treat everybody the same. The regulators in the country chose to deregulate broadband, and the digital discrimination we see in the market is the direct consequence of that choice. When AT&T was a giant regulated monopoly we required it to charge everybody the same prices and take profits from affluent customers to support infrastructure and prices in low-income neighborhoods and rural places. Regulation wasn’t perfect, but we didn’t have the current gigantic infrastructure and price gaps.

If people decide to respond to this FCC docket, we’ll see more evidence of discrimination based on income. We might even get some smoking gun evidence that some of the discrimination comes from corporate bias based on race and other factors. But discrimination based on income levels is so baked into the ways that corporations act that I can’t imagine that anybody thinks this docket is going to uncover anything we don’t already know.

I can’t imagine that this investigation is going to change anything. The FCC is not going to make big ISPs spend billions to clean up broadband networks in low-income neighborhoods. While Congress is throwing billions at trying to close the rural broadband gap, I think we all understand that anywhere the big corporations take the rural grant funding, the infrastructure is not going to be maintained properly, and in twenty years we’ll be having this same conversation all over again. We know what is needed to fix this – regulation that forces ISPs to do the right thing. But I doubt we’ll ever have the political or regulatory will to force the big ISPs to act responsibly.

Epic Broadband Outages

Every once in a while I hear a customer story that reinforces the big mistake we made in largely eliminating broadband regulation. This particular story comes from the Chatham News + Record in Chatham County, North Carolina. Some customers there experienced what can only be described as epic outages.

The first outage occurred on October 1 for residents near Charlie Cooper Road, caused by a line downed by Hurricane Ian. Duke Power restored power within two days, but it took twenty days for Brightspeed, the new incumbent telephone company that purchased the property from CenturyLink, to repair the damage. Not to give Brightspeed an excuse, but the outage occurred while the network was still owned by CenturyLink – the sale of the network closed on October 3, two days after the outage. Twenty days is still an extraordinarily long time to make a line repair, but I’ve been part of the aftermath of sales of telecom properties, and the first thirty days are often rough on the buyer.

The second outage occurred in the same rural neighborhood on November 28 when a tractor-trailer pulled down wires that were hanging too low. Residents believe that the low wires were the result of a shoddy repair after the Hurricane Ian outage. By this time, Brightspeed had owned the company for two months, and it took a full month, until December 27, to restore service.

Customers were highly frustrated because they got no useful response from Brightspeed customer service. There seemed to be no acknowledgment that there was an outage, even as multiple people called multiple times to complain about the outage.

This is not an unusual story in rural America. I’ve talked to dozens of folks who are rural customers of big telcos who have lost broadband for more than a week at a time, and some of them regularly lose service multiple times per year.

The article describes the problems the outages caused for residents. One resident was quoted as saying that broadband access has become as important as having water to the home.

One would think that consumers with this kind of problem could seek relief from the State – that a regulator could intervene to get the telephone company’s attention. When I was first in the industry, a customer complaint referred from a state commission got instant priority inside a telephone company.

But a workable complaint process is now a thing of the past. The rules for making a consumer complaint with the North Carolina Utilities Commission are a barrier to the average consumer and seem to favor big telcos. It’s not even clear if the NCUC has jurisdiction over broadband – a question with no clear answer anywhere since the FCC under Ajit Pai walked away from all broadband regulation. The NCUC still lightly regulates telephone service, but it’s not clear if that applies in the same way to broadband.

Regardless of the regulatory issues, the process for filing a complaint is not simple. A consumer must complete an official complaint form and file an original and 15 paper copies – complaints cannot be filed online or by email. The NCUC sends a copy of the complaint to the utility, which must respond in ten days. If the suggested solution from the utility is not adequate, the consumer can either drop the complaint or ask for a formal hearing – which would be an intimidating process for most folks, because the hearing is held in a formal court setting following normal court rules. Not many consumers are going to wade through this highly formal process, which is slanted in favor of utilities and their attorneys and not consumers.

The reality is that consumers have been at the mercy of the big telcos ever since state commissions deregulated telephone service. I’ve heard hundreds of stories over the years of big telcos that have run roughshod over folks. One of the most common stories I’ve heard in the last few years is of telcos disconnecting DSL rather than trying to fix it.

The first outage for these folks could have slipped through the cracks due to the extraordinary event of the telephone company changing ownership right after the outage. But there is no possible excuse for the second month-long outage. Most of my clients are small ISPs, and they all would have fixed this second outage within a day. I’ve repeatedly cautioned about giving large rural grants to the large telcos, and this outage is one of a thousand reasons not to do so.

Counting Broadband Locations

All of the discussion of the FCC maps lately made me start thinking about broadband connections. I realized that many of my clients are providing a lot of broadband connections that are not being considered by the FCC maps. That led me to think that the old definition of a broadband passing is quickly growing obsolete and that the FCC mapping effort is missing the way that America really uses broadband today.

Let me provide some real-life examples of broadband connections provided by my clients that are not being considered in the FCC mapping:

  • Broadband connections to farm irrigation systems.
  • Broadband to oil wells and mining locations.
  • Broadband to wind turbines and solar farms.
  • Fiber connections to small cell sites.
  • Broadband to electric substations. I have several electric company clients that are in the process of extending broadband to a huge number of additional field assets like smart transformers and reclosers.
  • Broadband to water pumps and other assets that control water and sewer systems.
  • Broadband to grain elevators, corn dryers, and other locations associated with processing or storing crops.
  • I’m working with several clients who are extending broadband for smart-city applications like smart streetlights, smart parking, and smart traffic lights.
  • Broadband to smart billboards and smart road signs.
  • Broadband for train yards and train switching hubs.

There are many other examples – this was just a quick list that came to mind.

The various locations described above have one thing in common. Most are locations that don’t have a 911 street address. As such, these locations are not being considered when trying to determine the national need for broadband.

A lot of these locations are rural in nature – places like grain elevators, mines, oil wells, irrigation systems, wind turbines, and others. In rural areas, these locations are a key part of the economy, and in many places are unserved or underserved.

We are putting a huge amount of national energy into counting the number of homes and businesses that have or don’t have broadband. In doing so, we have deliberately limited the definition of a business to a place with a brick-and-mortar building and a 911 address. But the locations above are often some of the most important parts of the local economy.

I’ve read predictions that say in a few decades there will be far more broadband connections to devices than to people, and that rings true to me. I look around at the multiple devices in my home that use WiFi, and it’s not hard to envision that over time we will connect more and more locations and devices to broadband.

After a decade of talking about the inadequate FCC broadband maps, we finally decided to throw money at the issue and devise new maps. But in the decade it took to move forward, we’ve developed multiple non-traditional uses for broadband, a trend that is likely to expand. If we are really trying to define our national need for broadband, we need to somehow make sure that the locations that drive the economy are connected to broadband. And the only way to do that is to count these locations and put them on the broadband map, so somebody tries to serve them. The current maps are doing a disservice by ignoring the huge number of these non-traditional broadband connections.

Mass Confusion over FCC Mapping

You might not be surprised to hear that I am tired of talking about the FCC map. I spend way too much time these days answering questions about the maps. I understand why folks are confused because there are several major mapping timelines and issues progressing at the same time. It’s nearly impossible to understand the significance of the many dates that are being bandied around the industry.

The first issue is the FCC mapping fabric. The FCC recently encouraged state and local governments and ISPs to file bulk challenges to the fabric by June 30. This is the database that attempts to identify every location in the country that can get broadband. The first mapping fabric, issued in June 2022, was largely a disaster. Large numbers of locations were missing from the first fabric, while it also contained locations that don’t exist.

Most experienced folks that I know in the industry are unhappy with the fabric because its definition of locations that can get broadband is drastically different from the traditional way the industry counts possible customers, commonly called passings. For example, the FCC mapping fabric might identify an apartment building or trailer park as one location, while the industry would count the individual living units as potential customers. This disconnect means that the fabric will never be useful for counting the number of folks who have (or don’t have) broadband, which I thought was the primary reason for the new maps. Some folks have estimated that even a corrected fabric might be shy by 30 or 40 million possible broadband customers.
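
The undercount is easy to see with a toy example contrasting the two counting methods. The records below are invented for illustration; this is not the actual fabric schema.

    # One fabric 'location' can hide many living units that the industry
    # would each count as a passing.
    fabric = [
        ("single-family home", 1),
        ("duplex", 2),
        ("garden apartment building", 48),
        ("trailer park parcel", 60),
    ]

    fabric_locations = len(fabric)                  # what the map counts: 4
    passings = sum(units for _, units in fabric)    # what an ISP counts: 111

    print(f"fabric locations: {fabric_locations}, passings: {passings}")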

Meanwhile, ISPs were instructed to use the original mapping fabric to report broadband coverage and speeds – the FCC 477 reporting process. The first set of the new 477 reporting was submitted on September 1, 2022. Many folks who have dug into the details believe that some ISPs used the new reporting structure to overstate broadband coverage and speeds even more than was done in the older maps. Overall, the new maps show far fewer folks who can’t buy good broadband.

There is a second round of 477 reporting due on March 1. That second 477 reporting is obviously not going to use the revised mapping fabric, which will still be accepting bulk challenges until June 30. It could take much longer for those challenges to be processed. There have been some revisions to the fabric due to challenges that were made early, but some of the folks who made early map challenges are reporting that a large majority of the challenges they made were not accepted. This means that ISPs will be reporting broadband on top of a map that still includes the mistakes in the original fabric.

The FCC’s speed reporting rules still include a fatal flaw in that ISPs are allowed to report marketing broadband speeds rather than actual speeds. This has always been the biggest problem with FCC 477 reporting, and it’s the one bad aspect of the old reporting that is still in place. As long as an ISP that delivers 10 Mbps download still markets and reports its speeds as ‘up to 100 Mbps’, the maps are never going to be useful for any of the stated goals of counting customers without broadband.

Finally, the NTIA is required to use the FCC maps to determine how much BEAD grant funding goes to each state. NTIA announced that it will report the funding allocation on June 30. That date means that none of the mapping challenges that states and counties have been working on will be reflected in the maps used to allocate the grant funding. The NTIA announcement implies that only the earliest challenges to the maps might be included in the database used to determine the number of unserved and underserved locations in each state. States that have already made challenges know that those numbers include a lot of mistakes and missed a lot of locations.

Not only will the NTIA decision on funding allocation not include the large bulk challenges filed or underway by many state and local governments, but it won’t reflect the latest 477 reporting being submitted on March 1. Several states have made rumblings about suing the NTIA if they don’t get what they consider a fair allocation of the BEAD funding. If that happens, all bets are off; a court could issue an injunction halting the grant allocation process until the maps get better. I can’t help but be cynical about this since I can’t see these maps ever being good enough to count the number of homes that can’t buy broadband. This whole mapping process is the very definition of a slow-motion train wreck, and that means I’ll likely be answering questions about the maps for the indeterminate future.