Starlink and RDOF

In August, the FCC denied the SpaceX (Starlink) bid to receive $885 million over ten years through the RDOF subsidy. Starlink won that funding in the RDOF reverse auction in December 2020.

In the press release for the rejection, FCC Chairman Jessica Rosenworcel was quoted as saying, “After careful legal, technical, and policy review, we are rejecting these applications. Consumers deserve reliable and affordable high-speed broadband. We must put scarce universal service dollars to their best possible use as we move into a digital future that demands ever more powerful and faster networks. We cannot afford to subsidize ventures that are not delivering the promised speeds or are not likely to meet program requirements.”

The FCC went on to say in the order that there were several technical reasons for the Starlink rejection. First was that Starlink is a “nascent” technology, and the FCC doubted the company’s ability to deliver broadband to 642,925 locations in the RDOF areas along with serving non-RDOF areas. The FCC also cited the Ookla speed tests that show that Starlink speeds decreased from 2021 into 2022.

Not surprisingly, Starlink appealed the FCC ruling this month. In the Starlink appeal, the company argued, “This decision is so broken that it is hard not to see it as an improper attempt to undo the commission’s earlier decision, made under the previous administration, to permit satellite broadband service providers to participate in the RDOF program. It appears to have been rendered in service to a clear bias towards fiber, rather than a merits-based decision to actually connect unserved Americans”.

Rather than focus on the facts in dispute in the appeal, today’s blog looks at the implications for the broadband industry during the appeal process. Current federal grant rules don’t allow federal subsidies to be given to any area that is already slated to get another federal broadband subsidy. This has meant that the RDOF areas have been off-limits to other federal grants since the end of 2020, including NTIA grants, USDA ReConnect grants, and others. Federal grant applicants for the last few years have had to carefully avoid the RDOF areas won by Starlink and any other unresolved RDOF award areas.

As a reminder, the RDOF areas were assigned by Census block and not in large, coherent, contiguous areas. The RDOF award areas have often been referred to as Swiss cheese, meaning that Census blocks that were eligible for RDOF were often intermixed with nearby ineligible Census blocks. A lot of the Swiss cheese pattern was caused by faulty FCC maps that excluded many rural Census blocks that should have been eligible for RDOF but for which a telco or somebody else was probably falsely claiming speeds of at least 25/3 Mbps.

ISPs that have been contemplating grant applications in the unresolved RDOF areas were relieved when Starlink and other ISPs like LTD Broadband were rejected by the FCC. It’s difficult enough to justify building rural broadband, but it’s even harder when the area to be built is not a neat, contiguous study area.

The big question now is what happens with the Starlink areas during an appeal. It seems likely that these areas will go back into the holding tank and remain off-limits to other federal grants. We’re likely going to need a definitive ruling on this from grant agencies like the USDA to verify, but logic would say that these areas still need to be on hold in case Starlink wins the appeal.

Unfortunately, there is no defined timeline for the appeal process. I don’t understand the full range of possibilities of such an appeal. If Starlink loses this appeal at the FCC, can the company take the appeal on to a court? Perhaps an FCC-savvy lawyer can weigh in on this question in the blog comments. But there is little doubt that an appeal can take some time. And during that time, ISPs operating near the widespread Starlink grant areas are probably still on hold in terms of creating plans for future grants.

The FCC Mapping Fabric

You’re going to hear a lot in the next few months about the FCC’s mapping fabric. Today’s blog is going to describe what that is and describe the challenges of getting a good mapping fabric.

The FCC hired CostQuest to create the new system for reporting broadband coverage. The FCC took a lot of criticism about the old mapping system, which assumed that an entire Census block was able to buy the fastest broadband speed available anywhere in the Census block. This means that even if only one home is connected to a cable company, the current FCC map shows that everybody in the Census block can buy broadband from the cable company.

To fix this issue, the FCC decided that the new broadband reporting system would eliminate this problem by having an ISP draw polygons around areas where it already serves or could provide service within ten days after a customer request. If done correctly, the new method will precisely define the edge of cable and fiber networks.

The creation of the polygons creates a new challenge for the FCC – how to count the passings inside of any polygon an ISP draws. A passing is any home or business that is a potential broadband customer. CostQuest tried to solve this problem by creating a mapping fabric. A simplistic explanation is that they placed a dot on the map for every known residential and business passing. CostQuest has written software that allows them to count the dots of the mapping fabric inside of any possible polygon.
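
For readers who want to see the mechanics, the dot-counting idea can be illustrated in a few lines of code. This is only a simplified sketch using the open-source shapely library with made-up coordinates – the actual CostQuest fabric and software are far more sophisticated.

```python
# A minimal sketch of the dot-counting idea behind the mapping fabric.
# The coordinates are invented for illustration only.
from shapely.geometry import Point, Polygon

# A hypothetical ISP coverage polygon (longitude, latitude pairs)
coverage_area = Polygon([
    (-93.50, 44.10), (-93.40, 44.10),
    (-93.40, 44.20), (-93.50, 44.20),
])

# Hypothetical fabric "dots" - one point per known home or business passing
fabric_points = [
    Point(-93.45, 44.15),   # inside the polygon
    Point(-93.47, 44.12),   # inside the polygon
    Point(-93.30, 44.15),   # outside the polygon
]

passings = sum(1 for p in fabric_points if coverage_area.contains(p))
print(f"Passings inside the claimed coverage area: {passings}")   # -> 2
```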

That sounds straightforward, but the big challenge was creating dots that correspond to actual passings. My consulting firm has been helping communities try to count passings for years as part of developing broadband business plans, and it is never easy. Communities differ in the raw data available to identify passings. Many counties have GIS mapping data that shows the location of every building in a community, but the accuracy and detail of that GIS data differ drastically by county. We have often tried to validate GIS data against other sources like utility records. We’ve also validated against 911 databases that show each registered address. Even for communities that have these detailed records, it can be a challenge to identify passings. We’ve heard that CostQuest used aerial maps to count rooftops as part of creating the FCC mapping fabric.

Why is creating a fabric so hard? Consider residential passings. The challenge becomes apparent as soon as you start thinking about the complexities of the different living arrangements in the world. Even if you have great GIS data and aerial rooftop data, it’s hard to account for some of the details that matter.

  • How do you account for abandoned homes? Permanently abandoned homes are not a candidate for broadband. How do you make the distinction between truly abandoned homes and homes where owners are looking for a tenant?
  • How do you account for extra buildings on a lot? I know somebody who has four buildings on a large lot that has only a single 911 address. The lot has a primary residence and a second residence built for a family member. There is a large garage and a large workshop building – both of which would look like homes from an aerial perspective. This lot has two potential broadband customers, and it’s likely that somebody using GIS data, 911 data, or aerial rooftops won’t get this one property right. Multiply that by a million other complicated properties, and you start to understand the challenge.
  • Farms are even harder to count. It wouldn’t be unusual for a farm to have a dozen or more buildings. I was told recently by somebody in a state broadband office that it looks like the CostQuest mapping fabric is counting every building on farms – at least in the sample that was examined. If this is true, then states with a lot of farms are going to get a higher percentage of the BEAD grants than states that don’t have a lot of compound properties with many buildings.
  • What’s the right way to account for vacation homes, cabins, hunting lodges, etc.? It’s really hard with any of the normal data sources to know which ones are occupied full time, which are occupied only a few times per year, which have electricity, and which haven’t been used in many years. In some counties, these kinds of buildings are a giant percentage of buildings.
  • Apartment buildings are really tough. I know from working with local governments that they often don’t have a good inventory of the number of apartment units in each building. How is the FCC mapping data going to get this right?
  • I have no idea how any mapping fabric can account for homes that include an extra living space like an in-law or basement apartment. Such homes might easily represent two passings unless the two tenants decide to share one broadband connection.
  • And then there is the unusual stuff. I remember being in Marin County, California and seeing that almost every moored boat has a full-time occupant who wants a standalone broadband connection. The real world is full of unique ways that people live.

Counting businesses is even harder, and I’m not going to list all of the complexities of defining business passings – but I think you can imagine it’s not easy.

I’m hearing from folks who are digging into the FCC mapping fabric that there are a lot of problems. ISPs say they can’t locate existing customers. They tell me there are a lot of mystery passings shown that they don’t think exist.

We can’t blame CostQuest if they didn’t get this right the first time – Americans are hard to count. I’m not sure this is ever going to be done right. I’m sitting here scratching my head and wondering why the FCC took this approach. I think a call to the U.S. Census Bureau would have gotten the advice that this is an impossible goal. The Census Bureau spends a fortune every ten years trying to identify where people live. The FCC has given itself the task of creating a 100% census of residences and businesses and updating it every six months.

The first set of broadband map challenges will be about the fabric, and I’m not sure the FCC is ready for the deluge of complaints they are likely to get from every corner of the country. I also have no idea how the FCC will determine if a suggestion to change the fabric is correct because I also don’t think communities can count passings perfectly.

This is not the only challenge. There are also going to be challenges to the coverage areas claimed by ISPs. The big challenge, if the FCC allows it, will be about the claimed broadband speeds. If the FCC doesn’t allow speed challenges, it is going to get buried in complaints. I think the NTIA was right to let the dust settle on challenges before using the new maps.

More WiFi Spectrum

There is more WiFi spectrum on the way, thanks to a ruling by the US Court of Appeals for the District of Columbia that rejected a legal challenge from the Intelligent Transportation Society of America and the American Association of State Highway and Transportation Officials, which had asked the court to vacate the FCC’s 2020 order repurposing some of the spectrum that had been reserved for smart cars.

The spectrum is called the 5.9 GHz band and sits between 5.85 GHz and 5.925 GHz. The FCC had decided to allocate the lowest 45 MHz of spectrum to WiFi while allowing the upper 30 MHz to remain with the auto industry.
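
The arithmetic of the split is simple. Here’s a quick sketch that derives the band edges from the figures above – the 5.895 GHz dividing line is just the bottom of the band plus the 45 MHz being repurposed:

```python
# Simple arithmetic behind the 5.9 GHz band split described above.
band_low_ghz, band_high_ghz = 5.850, 5.925
total_mhz = round((band_high_ghz - band_low_ghz) * 1000)   # 75 MHz in the band

wifi_mhz = 45                                   # lower portion repurposed for WiFi
wifi_edge_ghz = band_low_ghz + wifi_mhz / 1000  # 5.895 GHz dividing line
auto_mhz = total_mhz - wifi_mhz                 # 30 MHz remaining for C-V2X

print(f"WiFi: {band_low_ghz:.3f}-{wifi_edge_ghz:.3f} GHz ({wifi_mhz} MHz)")
print(f"Auto: {wifi_edge_ghz:.3f}-{band_high_ghz:.3f} GHz ({auto_mhz} MHz)")
```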

The process of transitioning the spectrum to WiFi will now begin. The FCC had originally given the auto industry a year to vacate the lower 45 MHz of spectrum and will likely have to set a new timeline to mandate the transition. The FCC also needs to rule on a waiver request from the auto industry to redeploy Cellular Vehicle-to-Everything (C-V2X) technology from the lower to the upper portion of the band. This is the technology that most of the industry is using for testing and deploying self-driving vehicles.

The lower 45 MHz of the new spectrum sits adjacent to the existing 5.8 GHz WiFi spectrum. Combining the new spectrum with the existing band is a boon to WISPs, which now get a larger uninterrupted swath of spectrum for point-to-multipoint broadband deployment. During the early stage of the pandemic, the FCC gave multiple WISPs the ability to use the 5.9 GHz spectrum on a trial basis for 60 days, and many of them have been regularly renewing those temporary licenses since then.

When the FCC announced the resolution of the lawsuit, the agency issued a press release describing the benefits touted by the WISPs that have been using the new spectrum. Some of them claimed to see between a 40% and 75% increase in throughput, mostly due to less congestion on spectrum that is rarely used – there was little or no interference during the last year. The spectrum also provided a clear path for wireless backhaul between towers. Of course, once this is made available to all WISPs, it’s likely that much of this benefit will disappear as everybody starts vying to use the new spectrum. But it is an increase in bandwidth potential, and that has to mean higher-quality wireless signals.

This spectrum will also be available for home WiFi. However, it takes a lot longer for the home WiFi industry to respond to new spectrum. It means upgrading home WiFi routers but also adding the capability to use the spectrum to the many devices in our homes and offices that use WiFi. Everything I’m reading says that we are still years away from seeing widespread use of the 6 GHz WiFi spectrum, and this new bandwidth will likely be rolled out at the same time.

This was an interesting lawsuit for several reasons. First, the entities filing the court suit challenged the FCC’s ability to change the use of spectrum in this manner. The court decision made it clear that the FCC is fully in the driver’s seat in terms of spectrum allocation.

This was also a battle between two large industries. The FCC originally assigned this spectrum to the auto industry twenty years ago. But the industry was slow to adopt any real-world uses of the spectrum, and it largely sat idle, except for experimental test beds. There is finally some movement toward deploying self-driving cars and trucks in ways that use the spectrum. But even now, there is still a lot of disagreement about the best technology to use for self-driving vehicles. Some favor smart roads that use spectrum to communicate with vehicles, while the majority opinion seems to favor standalone smart-driving technology in each vehicle.

Between this order and the 6 GHz spectrum, the FCC has come down solidly in favor of having sufficient WiFi spectrum going into the future. It’s clear that the existing bands of WiFi are already heavily overloaded in some settings, and the WiFi industry has been successful in getting WiFi included in huge numbers of new devices. I have an idea that we’ll look back twenty years from now and say that these new WiFi spectrum bands are not enough and that we’ll need even more. But this is a good down payment to make sure that WiFi remains vigorous.

Another RDOF Auction?

There was a recent interview in FierceTelecom with FCC Commissioner Brendan Carr that covered a number of topics, including the possibility of a second round of RDOF. Commissioner Carr suggested that improvements would need to be made to RDOF before making any future awards, such as more vetting of participants upfront or weighting technologies differently.

The FCC is building up a large potential pool of broadband funding. The original RDOF was set at $20 billion, with $4.4 billion set aside for a second reverse auction, along with whatever was left over from the first auction. The participants in the first RDOF auction claimed only $9.2 billion of the $16 billion available, leaving $6.8 billion. When the FCC recently decided not to fund LTD Broadband and Starlink, the leftover funding grew by another $2 billion. Altogether, that means over $11 billion is left of the funds that were intended for RDOF.
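
Here’s a rough back-of-the-envelope tally of those figures. The inputs are the approximate numbers cited above, so treat the total as a ballpark rather than an official FCC accounting:

```python
# Rough tally of unclaimed RDOF funding, in billions of dollars.
phase_1_budget  = 16.0   # budget for the first reverse auction
phase_1_claimed = 9.2    # winning bids from December 2020
phase_2_reserve = 4.4    # set aside for a second auction
defaults        = 2.0    # roughly, from the rejected LTD Broadband and Starlink awards

leftover = (phase_1_budget - phase_1_claimed) + defaults + phase_2_reserve
print(f"Unclaimed RDOF funding: about ${leftover:.1f} billion")   # comfortably over $11 billion
```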

We also can’t forget that around the same time as RDOF, the FCC had planned a 5G Fund to enhance rural cellular coverage. Due to poor mapping and poor data from the cellular carriers, that auction never occurred. That puts the pool of unused funding at the FCC at around $20 billion, plus whatever new FCC money might have accrued during the pandemic. That’s a huge pool of money, equal to half of the giant BEAD grants.

The biggest question that must be asked before considering another RDOF reverse auction is how the country will be covered by the BEAD grants. It would be massively disruptive for the FCC to try to inject more broadband funding until that grant process plays out.

Commissioner Carr said that some of the FCC’s funding could go to enhance rural cellular coverage. Interestingly, once the BEAD grant projects are built, enhancing rural cellular coverage is going to cost a lot less than originally estimated. A lot of the money in the proposed 5G Fund would have been used to build fiber backhaul to reach rural cell sites, and I think the BEAD last-mile networks will probably reach most of those places without additional funding. However, there is probably still a good case to be made to fund more rural cell towers.

But there are larger questions involved in having another reverse auction. The big problem with the RDOF reverse auction was not just that the FCC didn’t screen applicants first, as Carr and others have been suggesting. The fact is that a reverse auction is a dreadful mechanism for awarding broadband grant money. A reverse auction is always going to favor lower-cost technologies like fixed wireless over fiber – it’s almost impossible to weight different technologies for an auction in a neutral way. It doesn’t seem like a smart policy to give federal subsidies to technologies with a 10-year life versus funding infrastructure that might last a century.

Reverse auctions also take state and local governments out of the picture. The upcoming BEAD funding has stirred hundreds of communities to get involved in the process of seeking faster broadband. I think it’s clear that communities care about which ISP will become the new monopoly broadband provider in rural areas. If the FCC has a strict screening process up front, then future RDOF funding will only go to ISPs blessed by the FCC – and that probably means the big ISPs. I would guess that the only folks possibly lobbying for a new round of RDOF are companies like Charter and the big telcos.

The mechanism of awarding grants by Census block created a disaster in numerous counties where RDOF was awarded in what is best described as Swiss cheese serving areas. The helter-skelter nature of the RDOF coverage areas makes it harder for anybody else to put together a coherent business plan to serve the rest of the surrounding rural areas. In contrast, states have been doing broadband grants the right way by awarding money to coherent and contiguous serving areas that make sense for ISPs instead of the absolute mess created by the FCC.

A reverse auction also relies on having completely accurate broadband maps – and until the FCC makes ISPs report real speeds instead of marketing speeds, the maps are going to continue to be fantasy in a lot of places.

Finally, the reverse auction is a lazy technique that allows the FCC to hand out money without having to put in the hard effort to make sure that each award makes sense. Doing grants the right way requires people and processes that the FCC doesn’t have. But we now have a broadband office and staff in every state thanks to the BEAD funding. If the FCC is going to give out more rural broadband funding, it ought to run the money through the same state broadband offices that are handling the BEAD grants. These folks know local conditions and know the local ISPs. The FCC could set overall rules about how the funds can be used, but it should let the states pick grant winners based upon demonstrated need and a viable business plan.

Of course, the simplest solution of all would be for the FCC to cut the USF rate and stop collecting Universal Service Fund revenues from the public. The FCC does not have the staff or skills needed to do broadband grants the right way. Unfortunately, that might not stop the FCC from tackling something like another RDOF auction so it can claim credit for having solved the rural digital divide. If the FCC plans on another RDOF auction, I hope Congress stops it from being foolhardy again.

FCC Maps and Professional Engineers

When the FCC first adopted the new broadband data collection and mapping rules, the FCC included a requirement that ISPs must get FCC mapping data certified by a professional engineer or by a corporate officer who meets specific qualifications to make the certification. The genesis of this ruling was fairly clear – the FCC has taken a lot of flak about ISPs that have been submitting clearly inaccurate data about broadband coverage. To some degree, this was the FCC’s fault because the agency never reviewed what ISPs submitted, and there was no feedback or challenge mechanism for outsiders to complain about the maps – even though the FCC heard repeatedly about the poor quality of the maps. The FCC now wants an engineer to bless the coverage area for every ISP that submits broadband mapping data.

In July, the FCC temporarily backed off from that ruling since many ISPs are unable to find a professional engineer to bless their FCC reporting for the upcoming mapping deadline in September. The FCC will allow ISPs to get coverage data certified by an experienced engineer for the first three FCC data collection cycles, meaning that ISPs must comply with the original order two years from now.

I think the FCC ruling is going to be harmful to small ISPs, and I’ll describe why below. But first, I want to highlight what ISPs must do for the current 477 mapping data due next month. ISPs still need to get somebody who is qualified to certify the broadband coverage area. Note that an engineer is not certifying the broadband speeds – the mapping issue that matters the most. ISPs have three choices of folks who can provide the certification:

  • They can get the coverage area certified by a professional engineer.
  • They can get the data certified by an engineer who meets the following qualifications: 1) a degree in electrical engineering, electronic technology, or similar technical degree along with seven years of experience in broadband network design and/or performance, or 2) somebody with specialized training relevant to broadband network engineering and ten years of experience in the field.
  • A corporate officer of the ISP who has a degree in engineering and who also has direct knowledge of the network design. Note that this person must be a corporate officer and not just an employee of the ISP. ISPs cannot satisfy the future requirement by hiring a professional engineer unless that person also becomes a corporate officer. I’ll have to leave it up to lawyers to define what a corporate officer is, but I’m guessing a CTO is not a corporate officer.

The requirement to certify the biannual 477 data filings is going to be a burden for small ISPs for several reasons. First, as the FCC acknowledges in the recent ruling, there is a shortage of professional engineers in the broadband industry. I think this shortage is a lot more acute than the FCC understands. Big ISPs will have no problem meeting this requirement because these ISPs will meet the requirement with either a corporate officer with an engineering degree or by hiring a professional engineer.

The problem comes from the many small ISPs that don’t have a relationship with a professional engineer. Most small ISPs take great pride in having built their networks themselves without paying for expensive external engineering or consulting help. Small ISPs must be frugal if they want to survive. I’ve talked to several engineering companies in the industry, and they have zero interest in taking on new clients who only need them to certify FCC 477 filings. Engineering firms in the country are already working at full capacity due to the explosion of broadband grants and the general expansion of fiber networks. They view helping somebody with mapping as busywork rather than useful engineering. When the temporary FCC waiver is over, I don’t think little ISPs will find professional engineers willing to help them.

I also don’t think the FCC understands what it is requesting from a professional engineer. The FCC is asking the P.E. to certify that the ISP network reaches everywhere claimed in the 477 mapping data. Engineers are not going to be willing to sign a 477 certification without having done the research to fully understand the network. I can picture that easily costing $10,000 to $50,000, depending upon the complexity of the network. It’s clear in the mapping order that the FCC is counting on professional engineers to do that research – but I don’t think they understand how much this will cost a small ISP. Engineers are not going to certify a network without this research since they are putting their license on the line if they certify a network based solely on what an ISP tells them. As an aside, this requirement gets even more onerous if the P.E. must be certified in the same state as the ISP – some states have a major engineer shortage.

Second, I’m not sure that an engineer exists who can certify a WISP network with multiple radio sites. There are propagation models available that estimate the coverage of a given radio, and the FCC has suggested that those are an acceptable tool for understanding a radio’s reach. But every engineer understands that propagation studies are largely fantasy beyond a short distance from a transmitter. Local conditions like trees, buildings, and other impediments can affect the reach of a radio in ways that are not reflected in the propagation studies. WISPs usually don’t know if they can connect a new customer until they visit the customer’s house and try. How can an engineer certify the reach of a WISP network when the WISP itself doesn’t know it?
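
To illustrate why propagation studies are such shaky ground for a certification, here’s a small sketch using a textbook log-distance path-loss model. The link budget and clutter exponents are illustrative assumptions, not values from any real WISP radio – the point is how wildly the predicted reach swings once you start guessing at local conditions:

```python
import math

def max_range_km(budget_db, exponent, freq_mhz=5800.0, ref_km=0.1):
    """Largest distance where the modeled path loss still fits the link budget.
    Uses a log-distance model anchored to free-space loss at a short reference
    distance. Exponent 2.0 is free space; trees and buildings are typically
    modeled with exponents of roughly 3 to 4."""
    fspl_ref = 20 * math.log10(ref_km) + 20 * math.log10(freq_mhz) + 32.44
    return ref_km * 10 ** ((budget_db - fspl_ref) / (10 * exponent))

LINK_BUDGET_DB = 140   # illustrative assumption, not a real radio spec
for label, n in [("clear line of sight", 2.0), ("light trees", 3.0), ("heavy clutter", 3.8)]:
    print(f"{label:20s} exponent {n}: ~{max_range_km(LINK_BUDGET_DB, n):.1f} km")
# The predicted reach drops from roughly 40 km to under 3 km as the assumed
# clutter grows - and nobody knows the right exponent for a given house.
```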

I know that the FCC is trying to avoid the blame it has taken over the years for producing dreadful broadband maps. But in this case, the industry told the FCC why its requirements can’t work, and the agency ignored what it was told. Unfortunately, the FCC didn’t hear directly from the small ISPs – because it never does. These little companies don’t know what’s going on at the FCC and don’t make comments in dockets, even those that matter. For now, the FCC has booted this issue two years down the road – but I can promise that the same issues will exist then that exist now, and small ISPs will be unable to comply with this requirement, even if they want to.

Who Should Report to the FCC Mapping?

I think there are a lot of ISPs that are not participating in the FCC data collection effort that the industry refers to as the broadband maps. In almost every county I’ve ever worked in, I run across a few ISPs that are not reporting broadband coverage. There are several categories of these non-reporting ISPs.

I often run across small regional WISPs and occasionally across fiber overbuilders that are not listed in the database. I know these ISPs are there because people claim them as their ISP when we do a broadband survey. These ISPs generally have a website that lists broadband rates and coverage areas – but for whatever reason, these ISPs do not participate in the FCC mapping database.

My guess in most cases is that these small ISPs don’t think they are required to report – they either don’t even know about the database, or they don’t fear any repercussions for not reporting. These are generally small single-owner or family businesses, and the owners might think that broadband isn’t regulated. Some of these ISPs have operated for years, and nobody has ever knocked on their doors about regulation, so they remain either blissfully unaware of their obligation to report or they don’t think it is important.

Another category that often doesn’t report is local governments that provide the fiber connectivity to their own buildings and sometimes to a few key businesses in town. These are not always small, and there are municipal networks in larger cities that are not included in the FCC database. Many cities don’t think they are ISPs even if they perform all of the ISP functions. They provide bandwidth to and from the Internet using facilities that they have built to connect to users. In some cases, there is an underlying ISP serving the city, but often there is not. Another similar category is school networks that buy wholesale bandwidth and do all of the ISP functions.

These local governments are doing themselves a disservice by not reporting because their government buildings are not listed as being served by fiber. That could open the door for some other ISP to ask for grant funding to serve anchor institutions in the region that are already served.

Another interesting group of ISPs that often doesn’t report to the FCC is companies that buy wholesale loops in an open-access or leased-loop environment. Generally, these loops are pure transport, and the ISP has to handle the functions of routing traffic to and from the Internet. These folks also often don’t think they are ISPs because they don’t own the fiber loop – but the entity that performs the ISP functions for a customer is the ISP and should be reporting to the FCC. These are often small companies that tackle being an ISP as a sideline business, and I would guess they don’t think they are regulated.

The group that mystifies me the most is some of the big national ISPs. There are ISPs that have nationwide contracts to serve all branches of national chains like hotels, banks, etc. In a city of 20,000 or larger, there are often a half dozen such ISPs serving one or more businesses. But I regularly find that a few big ISPs are not reporting to the FCC. I’ve always wondered if some other big ISP includes these customers in its reporting, but when I look at the granular data, it often looks like many of the national chains served by fiber are not claimed by any ISP. The new FCC mapping is going to get a lot more granular, and maybe we’ll finally be able to see if such connections are reported by somebody.

All of the ISPs that don’t report probably add up to only a minuscule sliver of the ISP market. However, these are often some of the most important connections in a city since they are the customers served with fiber. A small-city fiber network might be bringing multi-gigabit broadband to city buildings or a handful of businesses, and nobody knows about it.

I don’t know that the FCC has any hope of uncovering these small ISPs, and it’s probably not worth the investigative effort to identify them. But at least part of the blame for this lies with the FCC. The agency doesn’t have clear guidelines in plain English, with examples, defining who is an ISP. But even that might not help, since it seems that many small ISPs barely know the FCC exists.

Should Grant Networks Allow High Prices?

I wrote a blog yesterday about a grant application filed in Nebraska by AMG Technology Investment Group (Nextlink Internet). This is one of the companies that won the RDOF reverse auction at the FCC but is still waiting to hear if it will be awarded the FCC subsidy funding.

One of the things that caught my eye on the grant request was the proposed broadband rate. Nextlink is proposing a rate of $109.95 for a 2-year contract for 100/100 Mbps. I have to assume that the rate without a 2-year contract is even higher – or maybe a customer can’t buy broadband for less than a 2-year commitment.

Today’s blog asks the question – should higher-than-market rates be allowed on a network that is being subsidized with public funding? This is not the first time I’ve seen a rate that high, and I can recall at least two other RDOF winners planning on basic rates of at least $100. One example is Starlink, which also has not yet been approved by the FCC for RDOF and which has a $110 rate.

I don’t think there is any question that a $110 rate is higher than the market. Should an agency that awards grants or other broadband subsidies insist that broadband rates be somehow tied to market rates? That’s a harder question to answer than you might think because it implies that these agencies have the power to regulate or cap broadband prices in grant areas.

The Ajit Pai FCC voluntarily gave away the FCC’s right to regulate broadband rates when it gave up Title II authority. It’s not clear if that decision has any bearing on other federal agencies that award grants like NTIA, EDA, and USDA. Can these federal agencies insist on affordable rates for ISPs that take federal funding? If not, can the agencies at least consider rates when deciding who gets grant funding – can these agencies assign fewer qualifying grant points to somebody with a $100 basic rate compared to somebody with a $50 rate?

I think we got a hint that considering rates is probably allowed, since Congress made it clear in the BEAD legislation that the NTIA has no authority to regulate rates – implying that without that specific Congressional mandate, the NTIA might have had that authority. But even the specific BEAD edict might not mean that rates can’t be considered in BEAD grants.

It’s an even fuzzier question whether a State has the right to set rates. There have always been two schools of thought about the scope of State versus Federal authority in terms of regulating broadband. I’ve heard it argued that a State’s right to regulate broadband rolls downhill from the federal ability to regulate. If you believe that philosophy, then a State’s right to regulate broadband rates was severely weakened when the FCC gave up its rights. But I’ve also heard just the opposite argued – that a State has the right to step into any regulatory void left by federal regulators. We saw this concept in action when courts recently upheld California’s right to implement net neutrality rules after the FCC washed its hands of such authority. If you accept this view of regulation, a State can tackle rate regulation if the FCC refuses to do so.

To be fair to Nextlink, the company also offers less expensive broadband. Rates for its fixed wireless products start at $69.95 for a 15 Mbps download connection, and fiber prices start at $49.99 for a 25 Mbps download speed. But these lower rates for slower speeds raise more questions for me. Many of the current broadband grants require building networks that can deliver at least 100/100 Mbps broadband. Should an ISP be able to use a grant-funded network to offer anything slower? The whole point of these grant programs is to bring faster broadband across America. Should a network that is funded with public money be allowed to offer slower speeds as its most affordable options? If so, it’s hard to argue that the ISP is delivering 100/100 Mbps broadband everywhere. If the agencies awarding grants can’t demand affordable rates, perhaps they can demand that 100/100 Mbps is the slowest product that can be offered on a grant-subsidized network. Nobody is forcing ISPs to accept grant funding and other subsidies, but when they elect to take public money, it seems like there can be strings attached.

I also wonder whether ISPs benefiting from a grant-subsidized network ought to have the ability to force customers into long-term contracts. It’s not hard to make the case that the public money paying for the network should justify public-friendly products and practices.

As a final note, this topic highlights another glaring shortfall of awarding subsidies through a reverse auction rather than through grants. With RDOF, the reverse auction determined the winner of the subsidy first, and then the FCC proceeded to find out the plans of the subsidy winners. There were no pre-determined rules for issues like rates that an RDOF winner was forced to accept as part of accepting the public money. Let’s not do that again.

The FCC Tackles Pole Replacements

In March, the FCC issued a Second Further Notice of Proposed Rulemaking FCC 22-20 that asks if the rules should change for allocating the costs of a pole replacement that occurs when a new carrier asks to add a new wire or device onto an existing pole. The timing of this docket is in anticipation of a huge amount of rural fiber construction that will be coming as a result of the tsunami of state and federal broadband grants.

The current rules push the full cost of replacing a pole onto the entity that is asking to get onto the pole. This can be expensive and is one of the factors that make it a challenge for a fiber overbuilder or a small cell carrier to get onto poles.

There are several reasons why a pole might need to be replaced to accommodate a new attacher:

  • The pole might be completely full, and there is no room for the new attacher. There are national safety standards that must be met for the distance between each attacher on a pole – these rules are intended to make it safe for technicians to work on or repair cables. There is also a standard for the minimum clearance that the lowest attacher must be above the ground – a safety factor for the public.
  • The new attacher might be adding additional weight or wind resistance to a pole – there is a limit on how much weight a pole should carry to be safe. Wind resistance is an important factor since there is a lot of stress put onto poles when heavy winds push against the wires.

This docket was prompted in 2020 when NCTA – the Internet and Television Association – filed a petition asking that pole owners pay a share of pole replacement costs. The petition also asked for an expedited review process for pole attachment complaints between carriers.

NCTA makes some valid points. Many existing poles are in bad shape, and the new attacher is doing a huge favor for the pole owner if it pays for poles that should have been replaced as part of regular maintenance. Anybody who works in multiple markets knows of places where almost all of the existing poles are in bad shape and should be replaced by the pole owner. The FCC labels such poles as already being out of compliance with safety and utility construction standards and asks if it’s fair for a new attacher to pay the full cost of replacement. The FCC is asking if some of the costs of a replacement should be allocated to the pole owner and existing attachers in addition to the new attacher.

Not surprisingly, both AT&T and Verizon have responded to this docket by saying the current cost allocation processes are fine and shouldn’t be changed. This is not an unexpected response for two reasons. First, these two companies probably have more miles of cable on existing poles than anybody else, and they do not want to be stuck paying a share of the cost of pole replacements triggered by new attachers. More importantly, the big telcos have always favored rules that slow down construction for competitors – pole attachment problems can bring a fiber construction project to a screeching halt.

In contrast, INCOMPAS filed comments on behalf of fiber builders. INCOMPAS said that pole attachment issues might be the single most important factor that will stop the federal government from meeting its goals of connecting everybody to broadband. INCOMPAS says that the extra costs for pole replacement in rural areas can sink a fiber project.

As usual with a regulatory question, the right answer is somewhere in the middle of the extremes. It is unfair to force a new attacher to pay the full cost to replace a pole that is already in bad shape. Pole owners should have an obligation to do regular maintenance to replace the worst poles in the network each year – and many have not done so. It’s also fair, in some circumstances, for the existing attachers to pay a share of the pole replacement when existing attachments are in violation of safety rules. And, if we are going to build billions of dollars of new broadband networks as a result of grants, it makes sense for regulators to gear up for an expedited process of resolving disputes between carriers concerning poles.

A New Definition of Broadband?

FCC Chairman Jessica Rosenworcel has circulated a draft Notice of Inquiry inside the FCC to kick off the required annual report to Congress on the state of U.S. broadband. As part of preparing that report, she is recommending that the FCC adopt a new definition of broadband of 100/20 Mbps and establish gigabit broadband as a longer-term goal. I have a lot of different reactions to the idea.

First, the FCC is late to the game since Congress has already set a speed of 100/20 Mbps for BEAD and other federal grant programs. This is entirely due to the way that the FCC has become totally partisan. Past FCC Chairman Ajit Pai was never going to entertain any discussion of increasing the definition of broadband since he was clearly in the pocket of the big ISPs. The FCC is currently split between two Democrats and two Republicans, and I find it doubtful that there can be any significant progress at the FCC on anything related to broadband in the current configuration. I have to wonder if the Senate is ever going to confirm a fifth commissioner – and if not, can this idea go anywhere?

Another thought that keeps running through my mind is that picking any speed as the definition of broadband is completely arbitrary. We know in real life that the broadband speed to a home changes every millisecond, and speed tests only capture an average of that network chaos. One of the things we found out during the pandemic is that jitter might matter more than speed. Jitter measures the variability of the broadband signal, and a customer can lose connectivity on a network with high jitter if the speed drops too low, even for a few milliseconds.
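
To show why jitter tells a different story than an average, here’s a tiny sketch using made-up ping times. The jitter calculation is just the average change between consecutive latency samples – one common, simple way to measure it:

```python
# Made-up ping results in milliseconds - a connection with occasional big spikes.
latency_ms = [22, 24, 21, 95, 23, 25, 22, 88, 24, 23]

average_latency = sum(latency_ms) / len(latency_ms)
jitter = sum(abs(b - a) for a, b in zip(latency_ms, latency_ms[1:])) / (len(latency_ms) - 1)

print(f"Average latency: {average_latency:.1f} ms")   # 36.7 ms - looks tolerable
print(f"Jitter:          {jitter:.1f} ms")            # 31.9 ms - the spikes that break calls
```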

I also wonder about the practical impact of picking a definition of speed. Many of the current federal grants define a served customer as having an upload speed of at least 20 Mbps. It’s clear that a huge number of cable customers are not seeing 20 Mbps upload speeds, and I have to wonder if any State broadband offices will be brave enough to suggest using federal grant funding to overbuild a cable company. If not, then a definition of broadband as 20 Mbps upload is more of a suggestion than a rule.

Another issue with setting a definition of speed is that any definition will classify some technologies as not being broadband. That brings a lot of pressure from ISPs and from the manufacturers of those technologies. This was the biggest problem with the 25/3 Mbps definition and DSL. While it is theoretically possible to deliver 25/3 Mbps broadband on a single copper wire, the big telcos spent more than a decade claiming to meet speeds that they clearly didn’t and couldn’t deliver. We’re seeing the same technology fights now with a 100/20 Mbps definition of broadband. Can fixed wireless or low-orbit satellite technology really achieve 100/20 Mbps?

Another issue that has always bothered me about picking a definition of broadband is that the demand for speed keeps growing. If you define broadband by the speeds that are needed today, then that definition will soon be obsolete. The last definition of broadband speed was set in 2015. Are we going to wait another seven years if we change to 100/20 Mbps this year? If so, the 100/20 Mbps definition will quickly become as practically obsolete as 25/3 Mbps did.

Finally, a 100/20 Mbps speed is already far behind the market. Most of the big cable companies have recently declared their basic broadband download speed to be 200 Mbps. How can you set a definition of broadband that has a slower download speed than what is being offered to at least 65% of the households in the country? One of the mandates given to the FCC in the Telecommunications Act of 1996 was that rural broadband ought to be in parity with urban broadband. Setting a definition of broadband only matters for customers who don’t have access to good broadband. Do we really want to use federal money in 2022 to build 100 Mbps download broadband when a large majority of the market is already double that speed today?

Trying to define broadband by a single speed is a classic Gordian knot – a problem that can’t be reasonably solved. We can pick a number, but by definition, any number we choose will fail some of the tests I’ve described above. I guess we have to do it, but I wish there were another way.

Will Broadband Labels Do Any Good?

The FCC is still considering the use of broadband labels that are supposed to explain broadband to customers. This sounds like a really good idea, but I wonder if it’s really going to be effective.

Some of the items included on the FCC sample label are great. The most important fact is the price. It has become virtually impossible to find broadband prices for many ISPs. Many ISPs, including the largest ones, only show special pricing online that applies to new customers. These ISPs show the public the sale prices, but it’s often impossible to learn the list prices. It’s often the same if somebody calls an ISP – they’ll be offered different promotional packages, but it’s like pulling teeth to get the truth about the everyday price that kicks in at the end of a promotion.

I’m curious about how the broadband labels will handle bundling. The surveys we’ve done recently show that half or more of homes in many markets are still buying a bundle that might include broadband plus voice, cable TV, security, smart home, or cellular. Big ISPs have never wanted to disclose the cost of individual products inside a bundle, and I can’t wait to see how ISPs handle labeling a bundled broadband product.

There are also hidden fees and other ways to disguise the real price. Disclosing pricing will be a huge breath of fresh air – if ISPs are forced to be totally honest. I can imagine the PR and marketing groups at the bigger ISPs are already agonizing over how to disclose pricing while still keeping it cloudy and mysterious.

More perplexing is the broadband speed issue. The sample label that the FCC circulated for comment would require ISPs to list the typical download speeds, typical upstream speeds, typical latency, and typical packet loss. What does typical mean? Consider a Comcast market where the company sells residential broadband that ranges between grandfathered 50 Mbps and 1.2 Gbps. What is the typical speed in that market? How will any consumer be able to judge what a typical speed means?

I’ve written about broadband speeds a lot, and for many technologies, the speeds vary significantly for a given customer during the day. What’s the typical broadband speed for a home that sees download speeds vary by 50% during a typical day? I don’t always want to come across as skeptical, but I’m betting that the big cable companies will list the marketing speeds of their most popular broadband product and call it typical. Such a number is worthless because it’s what customers are already being told today. I don’t have a proposed solution for the various speed dilemmas, but I fear that whatever is told to customers will be largely uninformative.
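
To make the “typical speed” problem concrete, here’s a small sketch with invented speed-test samples from homes on different tiers in one hypothetical market. Each summary statistic gives a different answer, and an ISP could defensibly report any of them:

```python
import statistics

# Invented speed-test results (Mbps) from homes on different tiers in one market.
samples = [48, 52, 110, 180, 210, 240, 310, 420, 940, 1150]

print(f"Mean:   {statistics.mean(samples):.0f} Mbps")     # 366 Mbps
print(f"Median: {statistics.median(samples):.0f} Mbps")   # 225 Mbps
# An ISP could call either number - or the marketing speed of its most popular
# tier - the "typical" speed, and each tells a very different story.
```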

What will the typical consumer do when told the typical latency and packet loss? It’s hard to think many homes will understand what those terms mean or what the typical values mean.

ISPs are also supposed to disclose network management processes. Does this mean a cable company must be truthful and tell some neighborhoods that their coaxial cable is too old and needs to be replaced – because that is a specific network practice? Will a cable company tell a customer that their neighborhood node is oversubscribed, which accounts for slowdowns at peak times? I’m guessing the network management processes will be described at the total market level instead of at the neighborhood level – again, making them largely uninformative.

I’m also curious how the FCC will know if customers are being told the truth. Folks who read this blog might tell the FCC if a broadband label is deceptive or wrong – but what is the FCC going to do with such complaints? Broadband issues are often hyper-local, and what happens on my block might be different than somebody living just a few blocks away.

I want to be clear that I am not against the broadband labels. Forcing ISPs to be public with prices is long overdue, as long as they disclose the truth. But I’m skeptical about many other things on the labels, and I fear big ISPs will use the labels as another marketing and propaganda tool instead of disclosing what people really need to know.