U.S. Technology Competitiveness

There is a bipartisan bill moving through Congress, The American Technology Leadership Act, with the goal of assessing U.S. leadership in key technologies relative to competitors like China. The bill is sponsored by U.S. Senators Michael Bennet (D-Colo.), Ben Sasse (R-Neb.), and Mark Warner (D-Va.).

The Act would create an Office of Global Competition Analysis to assess how the United States fares in key emerging technologies relative to other countries in order to inform policy and strengthen U.S. competitiveness. The new office would be staffed by experts from the Departments of Commerce, Treasury, and Defense, along with the Intelligence Community and other relevant agencies, and could also draw on experts from the private sector and academia on a project basis. The office would support both economic and national security policymakers and would assess U.S. competitiveness in areas like semiconductors and artificial intelligence.

When I read that, my first reaction was that this sounds totally sensible. In a world that has shifted to a global economy, it sounds smart to take steps to make sure that the U.S. has access to the best technologies. We learned in the aftermath of the pandemic how vulnerable we are to the vagaries of the supply chain and other economic disruptions. It makes sense to have something like a Department of the Future that is tasked with thinking about how new technologies will change our economy and security.

But then what? Is there anything the government can do to change the way that new technology affects the world and the U.S.? A new technology like artificial intelligence is not something that can be curated or directed. The big breakthroughs are likely to come from four smart people in a university lab somewhere on the planet who strike upon the right solution. The idea of any government somehow controlling the development of a new technology is ludicrous. I would venture to say that the smartest people in the artificial intelligence field have no idea where or when the big breakthroughs will happen. New technologies invariably take a different path than anybody ever anticipated.

The U.S. doesn’t have the best track record of trying to direct new technologies. Just in recent decades, I can recall when Bill Clinton was going to establish the U.S. as the world leader in microtechnology. I remember a big federal push to make the U.S. the world leader in solar power. And folks in our industry all remember the boondoggle that was the government’s big push for winning the 5G war with China.

The 5G story is a cautionary tale about why governments should not meddle with or back new technologies. Seemingly the whole federal government, from the White House to Congress to the FCC, became convinced that 5G was going to revolutionize the world and that the U.S. must have a major role in the development of the technology. It went so far as real discussions about the U.S. government buying Nokia or Ericsson.

It turns out that the whole 5G fiasco was mostly the runaway result of hard lobbying by the big cellular carriers to get more spectrum. Of course, the carriers were also hopeful of tax breaks and handouts to help them towards the 5G future.

And that highlights the real danger of the government trying to meddle with technology. Studying trends and assessing the impacts of technologies sounds sensible. Making sure the U.S. has a source in the supply chain for new technologies sounds like a great idea after a new technology reaches the market. But it never turns out well when any government goes further. The government should not be in the business of picking winners and losers. Doing so invariably leads to huge government handouts for the corporations involved, and such funding is unlikely to ever produce the intended results – the worldwide economy and industries are ultimately what turn new technologies into something useful, and government interference in that process is much more likely to hinder new technologies than to help them.

Should Grant Networks Allow High Prices?

I wrote a blog yesterday about a grant application filed in Nebraska by AMG Technology Investment Group (Nextlink Internet). This is one of the companies that won the RDOF reverse auction at the FCC but is still waiting to hear if it will be awarded the FCC subsidy funding.

One of the things that caught my eye on the grant request was the proposed broadband rate. Nextlink is proposing a rate of $109.95 with a 2-year contract for 100/100 Mbps. I have to assume that the rate without a 2-year contract is even higher – or maybe a customer can’t buy broadband at all without a 2-year commitment.

Today’s blog asks the question – should higher-than-market rates be allowed on a network that is being subsidized with public funding? This is not the first time I’ve seen a rate that high, and I can recall at least two other RDOF winners planning on basic rates of at least $100. One example is Starlink, which also has not yet been approved by the FCC for RDOF and which has a $110 rate.

I don’t think there is any question that a $110 rate is higher than the market. Should an agency that awards grants or other broadband subsidies insist that broadband rates be tied to market rates? That’s a lot harder question to answer than you might think because the question implies that these agencies have the power to regulate or cap broadband prices in grant areas.

The Ajit Pai FCC voluntarily gave away the agency’s right to regulate broadband rates when it gave up Title II authority. It’s not clear if that decision has any bearing on other federal agencies that award grants like NTIA, EDA, and USDA. Can these federal agencies insist on affordable rates for ISPs that take federal funding? If not, can the agencies at least consider rates when deciding who gets grant funding – can these agencies assign fewer qualifying grant points to somebody with a $100 basic rate compared to somebody with a $50 rate?

I think we got a hint that considering rates is probably allowed, since Congress made it clear in the BEAD legislation that the NTIA has no authority to regulate rates – which implies that without that specific Congressional mandate, the NTIA might have had that authority. But even the specific BEAD edict might not mean that rates can’t be considered in awarding BEAD grants.

It’s an even fuzzier question whether a State has the right to set rates. There have always been two schools of thought about the scope of State versus Federal authority in terms of regulating broadband. I’ve heard it argued that a State’s right to regulate broadband rolls downhill from the federal ability to regulate. If you believe in this philosophy, then a State’s right to regulate broadband rates was severely weakened when the FCC gave up its rights. But I’ve also heard just the opposite argued – that a State has the right to step into any regulatory void left by federal regulators. We saw this concept in action recently when courts upheld California’s right to implement net neutrality rules after the FCC washed its hands of such authority. If you accept this view of regulation, a State can tackle rate regulation if the FCC refuses to do so.

To be fair to Nextlink, the company also offers less expensive broadband rates. For its fixed wireless products, rates start at $69.95 for a 15 Mbps download connection, and fiber prices start at $49.99 for a 25 Mbps download speed. But these lower rates for slower speeds raise more questions for me. Many of the current broadband grants require building networks that can deliver at least 100/100 Mbps broadband. Should an ISP be able to use a grant-funded network to offer anything slower? The whole point of these grant programs is to bring faster broadband across America. Should a network that is funded with public money be allowed to set slower speeds for the most affordable options? If so, it’s hard to argue that the ISP is delivering 100/100 Mbps broadband everywhere. If the agencies awarding grants can’t demand affordable rates, perhaps they can demand that 100/100 Mbps is the slowest product that can be offered on a grant-subsidized network. Nobody is forcing ISPs to accept grant funding and other subsidies, but when they elect to take public money, it seems like there can be strings attached.

I also wonder whether ISPs benefitting from a grant-subsidized network ought to have the ability to force customers into long-term contracts. It’s not hard to make the case that the public money paying for the network should justify public-friendly products and practices.

As a final note, this topic highlights another glaring shortfall of awarding subsidies through a reverse auction rather than through grants. With RDOF, the reverse auction determined the winner of the subsidy first, and then the FCC proceeded to find out the plans of the subsidy winners. There were no pre-determined rules for issues like rates that an RDOF winner was forced to accept as part of accepting the public money. Let’s not do that again.

Please Make Grant Applications Public

Most broadband grant programs do not publish open grant applications for the public to see. But we are in a time when an ISP that is awarded funding to build a new broadband network is likely to become the near-monopoly ISP in a rural area for a decade or two to come. The public ought to get to see who is proposing to bring them broadband so that these decisions are not made behind closed doors.

One of the interesting things about writing this blog is that people send me things that I likely would never see on my own. It turns out that the Nebraska Public Service Commission posts grant applications online. I think that every agency awarding last-mile grant funding should be doing the same.

The particular grant application that hit my inbox is from AMG Technology Investment Group (Nextlink Internet). This grant seems to be asking for state funding in the same or nearby areas where Nextlink won the RDOF auction. The FCC still hasn’t made that RDOF award to Nextlink, almost 20 months later.

The person who sent me the grant application wanted to point out inconsistencies and that the application didn’t seem to be complete. I’m not sure that’s unusual. One state grant office told me recently that it outright rejects about half of all grant applications for being incomplete. The email to me included a number of complaints. For example, they thought there was an inconsistency since this grant asks to fund a 100/100 Mbps network when the speed promised for RDOF was symmetrical gigabit. They were dismayed that the grant application didn’t include a specific network design.

The point of this blog is not to concentrate on this particular grant application but to point out that letting the public see grant requests can prompt interesting observations and questions like the ones that ended up in my inbox. I have no knowledge of the Nebraska PSC grant program or its processes. The PSC might routinely ask a grant applicant to fill in any missing gaps, and for all I know, it may have already asked these questions of Nextlink. Certainly, not all public input will be valid, but the public can raise issues that a grant office might not otherwise hear.

I’d like to praise the Nebraska PSC for putting the grant application online. In most state grant programs, the broadband grant requests are never shown to the public – even after they are awarded. At most, grant offices might publish a paragraph or two from the executive summary of a grant request.

I talked to several grant offices about this issue, and they told me that they are not comfortable disclosing financial information about a grant applicant. That’s a valid concern, but a grant application can easily be structured so that financial information is in a separate attachment that can be kept confidential if requested by the applicant. I would note that some grant applicants I work with, like electric cooperatives, would welcome disclosing everything as a way to invite comparison with other applicants.

I don’t think there is any question that the public wants to see grant requests from the companies that are vying to become the new dominant ISP in the community. Communities ought to have a chance to weigh in against an ISP they don’t want, against a technology they don’t want, or in favor of a particular ISP if there are multiple ISPs asking for funds for the same geographic footprint.

Letting the public see grant requests is also a way to fact-check ISPs. Most states will tell you that the folks reviewing broadband grants often don’t have a lot of experience with the inner workings of ISPs. This means that it is easy for an ISP to snow a grant reviewer with misleading statements that an experienced reviewer would catch immediately. ISPs will be less likely to make misleading claims if they think the public will call them out and threaten the chances of winning the grant.

I know that publishing grant requests can open a whole new can of worms and extra work for a grant office. But I think the extra public scrutiny is healthy. I would think a grant office would want to know if false or misleading claims are made in a grant request. On the flip side, a grant office will benefit by knowing if the public strongly supports a grant request. Shining light on the grant process should be overall a positive thing. It’s a good check against awarding grants that aren’t deserved. But it’s also a way to make sure that grant offices are being fair when picking winners.

Small ISPs and the ACP

I’ve recently talked to several small ISPs who are having trouble navigating the FCC’s Affordable Connectivity Program (ACP). These ISPs are wondering if they should drop their participation. This is the program that gives a $30 monthly discount to customers who enroll in the plan through their ISP. The program is administered by USAC, which also administers the various Universal Service Fund programs.

The stories I’ve heard from these ISPs show that the program is challenging to use and slow to reimburse ISPs. There is no single major complaint about the administration of the program but rather a string of problems. Consider some of the following (and the list of complaints is much longer):

  • The rules are overly complex. As an example, an ISP must have different staff assigned to four functions – an Administrator, Operations, Analyst, and Agent. It turns out that various tasks can only be performed by one of these positions – something not explained in the rules.
  • There doesn’t seem to be any training available to ISPs joining the program. Instead, ISPs have to wade through the 166-page FCC rulemaking that created the ACP. The FCC says there have been over 700 training sessions on how to enroll new end-user customers, but the ISPs I talked to couldn’t find any online resources explaining the program from the ISP perspective – no videos and no frequently asked questions to help ISPs figure out how to get reimbursed from the program.
  • The ACP system returns unhelpful error messages when something doesn’t work. A common error message is “Your user name doesn’t seem to exist,” which is returned for a variety of online problems encountered by people who are logged into the system and clearly have valid user IDs. Error messages for any online system ought to tell a user what they did wrong. For example, an error message that says, “This function can only be done by an Analyst” would help an ISP figure out the problem (see the sketch after this list for what that kind of error handling might look like).
  • There is a hotline for ISPs, but unfortunately, the folks manning the hotline can’t answer even basic questions about the online system and refer a caller to the written rules. It’s obvious that the people answering the calls have never navigated through the system.
  • One ISP had been in the system for a while and found out it wouldn’t be paid for the discounts given to customers because the ISP hadn’t submitted the last four digits of each customer’s Social Security number. This doesn’t make sense since the FCC has ruled that an SSN is not needed to enroll a customer – the ACP rules allow for numerous other forms of identification. Customers didn’t need to input an SSN to join the ACP, and the ISP never asked for them. The ISP is now wondering if it will ever get reimbursed for these claims.
  • There is a disconnect between customer approval and the ISP portal. Customers are told through the customer portal that they are successfully enrolled in the ACP program, but when an ISP asks for reimbursement, it is often told that it must provide more identification to get reimbursed. In this situation, the customer is already getting the discount while the ISP is not yet eligible for reimbursement and will end up eating the customer discount.
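
To illustrate the point about error messages made above, here is a minimal sketch, in Python, of role-based checks that return a descriptive error instead of a generic failure. The role names mirror the four positions described earlier; the function names and the role-to-function mapping are hypothetical and not taken from the actual ACP portal.

```python
# Hypothetical sketch of role-based permission checks that return
# descriptive errors instead of a generic "user name doesn't seem to exist."
# The function names and role assignments below are invented for illustration.

ALLOWED_ROLES = {
    "submit_reimbursement_claim": {"Analyst"},
    "enroll_subscriber": {"Agent"},
    "manage_users": {"Administrator"},
    "upload_subscriber_snapshot": {"Operations"},
}

def check_permission(user_role: str, function_name: str) -> str:
    """Return an empty string if the action is allowed, otherwise a
    message that explains exactly why the request was rejected."""
    allowed = ALLOWED_ROLES.get(function_name)
    if allowed is None:
        return f"Unknown function '{function_name}'."
    if user_role not in allowed:
        roles = " or ".join(sorted(allowed))
        return (f"This function can only be performed by a user with the "
                f"{roles} role; you are logged in as {user_role}.")
    return ""

# Example: an Agent trying to file a reimbursement claim gets a clear answer.
print(check_permission("Agent", "submit_reimbursement_claim"))
```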

Overall, these ISPs told me that navigating the system and the rules is a major disincentive for them to participate in the ACP.

Why are these kinds of issues problematic for smaller ISPs? Bigger ISPs can assign a team to a program like this and give them enough time to figure out the nuances. Small ISPs have tiny staffs, particularly in the back office. Small ISPs can’t devote the many hours and days needed to solve the ACP puzzle. The small ISPs I’ve heard from are wondering why they are even bothering with the ACP. The program is not bringing in new customers but mostly is giving discounts to existing customers. There is no reimbursement for the hours the ISPs spend learning the system or navigating it each month. After all of the hassle, the ISPs are not receiving full reimbursement in every case, and even when they do, the payments are slow. ISPs have also heard through the grapevine that they will eventually be audited to make sure there is no fraud – anybody who has been through this kind of audit shudders at the idea.

Everything I read says that most of the discounts for ACP are being claimed by cellular resellers and not facility-based ISPs. I don’t know if that is finally changing, but if this isn’t made easier for ISPs, it’s likely that many ISPs will drop out or stop accepting additional ACP customers. The final issue ISPs worry about is that the program is only funded for perhaps two more years. They worry about the impact on their business if the program ends abruptly.

To be fair, any new online system has bugs. But the ACP was launched in January and replaced the similar EBB program. We are now far past the initial launch window, and nobody seems to be working to make the system usable. The FCC wants to brag about how well the ACP is doing, but it needs to put some work into making the program worth the effort for ISPs.

The Proliferation of Microtrenching

There is an interesting new trend in fiber construction. Some relatively large cities are getting fiber networks using microtrenching. Just in the last week, I’ve seen announcements of plans to use microtrenching in cities like Mesa, Arizona, and Saratoga Springs, New York. In the past, the technology was used for new fiber networks in Austin, Texas, San Antonio, Texas, and Charlotte, North Carolina. I’ve seen recent proposals made to numerous cities to use microtrenching to build new fiber networks.

Microtrenching works by cutting a narrow trench an inch or two wide and up to a foot deep for the placement of fiber cables. The trench is then sealed with a special epoxy that is supposed to make the cut as strong as the pavement was before.

Microtrenching got a bad name a few years back when Google Fiber walked away from a botched microtrenched network in Louisville, Kentucky. The microtrenching method used allowed water to seep into the narrow trenches, and the freezing and thawing during the winter caused the plugs and the fibers to heave from the small trenches. The vendors supporting the technology say they have solved the problems that surfaced in the Louisville debacle.

There is no doubt that microtrenching is faster than the more traditional method of boring and placing underground conduit. A recent article cited Ting as saying that a crew can microtrench 3,000 feet of fiber per day compared to 500 feet with traditional boring. Since a big part of the cost of building a network is labor, that can save a lot of money for fiber construction.
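
As a rough back-of-the-envelope illustration of what that production difference means, here is a quick calculation. The feet-per-day figures are the ones attributed to Ting above; the route mileage and daily crew cost are purely hypothetical numbers chosen for illustration.

```python
# Back-of-the-envelope labor comparison for a hypothetical 50-mile build.
# The 3,000 and 500 feet-per-day rates come from the article cited above;
# the route mileage and daily crew cost are invented for illustration.

route_feet = 50 * 5280        # hypothetical 50 route miles
crew_cost_per_day = 6000      # hypothetical fully loaded daily crew cost

for method, feet_per_day in [("microtrenching", 3000), ("traditional boring", 500)]:
    days = route_feet / feet_per_day
    print(f"{method:18s} {days:5.0f} crew-days, roughly ${days * crew_cost_per_day:,.0f} in labor")
```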

I’ve worked with cities that have major concerns about microtrenching. A microtrench cut is generally made in the street just a few inches from the curb. Cities worry since they have to routinely cut the streets in this same area to repair water leaks or to react to gas main leaks. In many cases, such repair cuts are made hurriedly, but even if they aren’t, it’s nearly impossible to dig down a few feet with a backhoe and not cut shallow fiber. This means a fiber outage every time a city or a utility makes such a cut in the street, with the outage likely lasting from a few days to a few weeks.

The bigger concern for cities is the durability of the microtrenched cuts. Even if the technology has improved, will the epoxy plug stay strong and intact for decades to come? Every city engineer gets nervous seeing anybody with plans to make cuts in fairly pristine city streets.

City engineers also get nervous when new infrastructure is placed at a depth they don’t consider ideal. Most cities require that a fiber network be placed three feet or deeper, below other utilities like water and gas. They understand how many cuts are made in streets every year, and they can foresee a lot of problems coming with a fiber network that gets regularly cut. City engineers do not want to be the ones constantly blamed for fiber outages.

There are new techniques that might make microtrenching less worrisome. In Saratoga Springs, New York, SiFi is microtrenching in the greenways – the space between the curb and the sidewalks. The company says it has a new technique to feed fiber under and around tree roots without harming them, thus minimizing damage to trees while avoiding the city streets. This construction method doesn’t sound as fast as microtrenching at full speed down a street, but it seems like a technique that would eliminate most of the worries of the civil engineers – assuming it really doesn’t kill all the trees.

It probably will take some years to find out in a given city if microtrenching was a good solution. The willingness to take a chance demonstrates how badly cities want fiber everywhere – after all, civil engineers are not known as risk takers. I have to imagine that in many cases the decision to allow microtrenching is being approved by somebody other than the engineers.

Planning for Churn

One of the factors that need to be considered in any business plan or forecast is churn – which is when customers drop service. I often see ISPs build business plans that don’t acknowledge churn, which can be a costly oversight.

There is a maxim among last-mile fiber networks that nobody ever leaves fiber to go back to a cable company network. That’s not entirely true, but it’s a recognition that churn tends to be lower on a last-mile fiber network than with other technologies. But customers leave fiber networks. Customers might die or move away. Customers might hit hard economic times and be unable to afford the connection.

I wrote a recent blog that asked if broadband is recession-proof. That was really asking if customers drop broadband when they lose jobs or see household income drop. The reality is that some folks have no choice but to drop fiber if things get tough enough. I’ve read several recent articles talking about how inflation in rents is likely to drive a few million people to become homeless – that might mean moving in with somebody else or becoming truly homeless, and broadband gets dropped along with everything else in those circumstances.

Churn varies a lot by community, and an ISP considering a new market should research the relocation rate. About 9.8% of all households move every year, or just over 15 million households. The percentage of people who move annually has declined steadily since the 1960s, when the rate was twice the current level. Renters move a lot more often than homeowners – in recent years, almost 22% of renters relocated each year compared to 5.5% of homeowners. ISPs all know that renters don’t live only in apartments, and in many communities, a significant percentage of homes are rented. Younger families tend to move a lot more often than older ones. Only about 1% of households move between states each year. About 16% of military families move every year.

Every ISP has customers who die every year. Pre-pandemic, around 2.7 million Americans were dying each year. During the pandemic, in 2020 and 2021, that leaped to around 3.4 million people each year.

Churn can be a big challenge for an ISP. It turns out that most people call to arrange electric, water, and broadband services before they show up in a new community – and in doing so, they most naturally call the incumbents. Somebody new to a town likely won’t know about a smaller or local ISP. Since most people come from communities with little or no competition, they don’t even know it’s possible to use an ISP other than the big incumbents.

Churn can be expensive. There is an obvious loss of revenue when a customer leaves. More insidious is the stranded investment in drops and installation costs that are no longer generating revenue to cover the investment cost. One of the most surprising things that fiber-ISPs often find is that they must continue to spend money on selling and new installations each year just to stand still with the penetration rate.
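
A simple way to see the “selling just to stand still” problem is to work through the arithmetic. All of the inputs in this sketch – passings, penetration, churn rate, and installation cost – are hypothetical numbers chosen only to show the shape of the math.

```python
# How many new installations does it take each year just to hold the
# penetration rate flat? Every input below is hypothetical.

passings = 5000          # homes passed by the network
penetration = 0.60       # current take rate
annual_churn = 0.08      # share of customers lost per year (moves, deaths, economics)
install_cost = 700       # cost of a drop plus installation, per customer

customers = passings * penetration
lost_per_year = customers * annual_churn

print(f"Customers today: {customers:.0f}")
print(f"Customers lost to churn each year: {lost_per_year:.0f}")
print(f"Installs needed just to stand still: {lost_per_year:.0f}")
print(f"Annual install spending just to hold penetration flat: ${lost_per_year * install_cost:,.0f}")
```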

I’ve seen ISPs with interesting strategies for dealing with churn. I have one ISP client in a college community that hangs out at the university with a booth and a sign that says gigabit internet. College students know what that means, and the ISP has been successful in maintaining a good penetration in off-campus housing. I have clients who pay commissions to real estate agents who refer new homeowners to them. It’s fairly routine to have arrangements with landlords and rental agents to have them get the word out about a broadband alternative to the incumbents.

Churn is one of the details of operating an ISP that many new ISPs don’t appreciate for a while. But it’s vital to have a strategy. It’s far cheaper to catch a new customer when they move to town, and it’s far less costly to connect a new tenant moving into a home that already has a drop.

The FCC Tackles Pole Replacements

In March, the FCC issued a Second Further Notice of Proposed Rulemaking FCC 22-20 that asks if the rules should change for allocating the costs of a pole replacement that occurs when a new carrier asks to add a new wire or device onto an existing pole. The timing of this docket is in anticipation of a huge amount of rural fiber construction that will be coming as a result of the tsunami of state and federal broadband grants.

The current rules push the full cost of replacing a pole onto the entity that is asking to get onto the pole. This can be expensive and is one of the factors that make it a challenge for a fiber overbuilder or a small cell site carrier to get onto poles.

There are several reasons why a pole might need to be replaced to accommodate a new attacher:

  • The pole might be completely full, with no room for the new attacher. There are national safety standards that must be met for the distance between each attacher on a pole – these rules are intended to make it safe for technicians to work on or repair cables. There is also a standard for the minimum clearance between the lowest attacher and the ground – a safety factor for the public. (A simplified version of this kind of clearance check appears in the sketch after this list.)
  • The new attacher might be adding additional weight or wind resistance to a pole – there is a limit on how much weight a pole should carry to be safe. Wind resistance is an important factor since there is a lot of stress put onto poles when heavy winds push against the wires.
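
The sketch below, referenced in the first bullet, shows the flavor of the make-ready clearance check an engineer performs before approving a new attachment. The spacing and clearance numbers are illustrative placeholders, not the actual NESC values for any particular jurisdiction, and real analyses also account for loading, sag, and mid-span clearances.

```python
# Simplified make-ready check: does a new communications attacher fit on
# an existing pole? The clearance values below are illustrative
# placeholders only -- real requirements come from the NESC and local codes.

MIN_SEPARATION_IN = 12                 # assumed spacing between communications attachments
MIN_GROUND_CLEARANCE_IN = 15.5 * 12    # assumed minimum height of the lowest attachment
POWER_SAFETY_SPACE_IN = 40             # assumed buffer below the lowest power attachment

def new_attachment_fits(existing_comm_heights_in, lowest_power_height_in):
    """Return True if one more communications attacher can fit on the pole
    without a pole replacement, under the simplified rules above."""
    top_of_comm_space = lowest_power_height_in - POWER_SAFETY_SPACE_IN
    usable_window = top_of_comm_space - MIN_GROUND_CLEARANCE_IN
    if usable_window < 0:
        return False
    # Number of attachment positions that fit at the minimum separation.
    slots = int(usable_window // MIN_SEPARATION_IN) + 1
    return slots >= len(existing_comm_heights_in) + 1

# Example: three existing communications attachers (heights in inches) on a
# pole with the lowest power conductor at 26 feet.
print(new_attachment_fits([190, 203, 216], 26 * 12))
```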

This docket was prompted in 2020 when NCTA – the Internet and Television Association – filed a petition asking that pole owners pay a share of pole replacement costs. The petition also asked for an expedited review process for pole attachment complaints between carriers.

NCTA makes some valid points. Many existing poles are in bad shape, and the new attacher is doing a huge favor for the pole owner if it pays for poles that should have been replaced as part of regular maintenance. Anybody who works in multiple markets knows of places where almost all of the existing poles are in bad shape and should be replaced by the pole owner. The FCC labels such poles as already being out of compliance with safety and utility construction standards and asks if it’s fair for a new attacher to pay the full cost of replacement. The FCC is asking if some of the costs of a replacement should be allocated to the pole owner and existing attachers in addition to the new attacher.

Not surprisingly, both AT&T and Verizon have responded to this docket by saying the current cost allocation processes are fine and shouldn’t be changed. This is not an unexpected response for two reasons. First, these two companies probably have more miles of cable on existing poles than anybody else, and they do not want to be slapped with paying a share of the cost of replacing poles from new attachers. More importantly, the big telcos have always favored rules that slow down construction for competitors – pole attachment problems can bring a fiber construction project to a screeching halt.

In contrast, INCOMPAS filed comments on behalf of fiber builders. INCOMPAS said that pole attachment issues might be the single most important factor that will stop the federal government from meeting its goals of connecting everybody to broadband. INCOMPAS says that the extra costs for pole replacement in rural areas can sink a fiber project.

As usual with a regulatory question, the right answer is somewhere in the middle of the extremes. It is unfair to force a new attacher to pay the full cost to replace a pole that is already in bad shape. Pole owners should have an obligation to do regular maintenance to replace the worst poles in the network each year – and many have not done so. It’s also fair, in some circumstances, for the existing attachers to pay a share of the pole replacement when existing attachments are in violation of safety rules. And, if we are going to build billions of dollars of new broadband networks as a result of grants, it makes sense for regulators to gear up for an expedited process of resolving disputes between carriers concerning poles.

Unlicensed Spectrum and BEAD Grants

There is a controversy brewing about the NTIA’s decision to declare that fixed wireless technology using only unlicensed spectrum is unreliable and not worthy of funding in the BEAD grants. WISPA, the lobbying arm for the fixed wireless industry, released a press release that says the NTIA has made a big mistake in excluding WISPs that use only unlicensed spectrum.

I’m not a wireless engineer, so before I wrote this blog, I consulted with several engineers and several technicians who work with rural wireless networks. The one consistent message I got from all of them is that interference can be a serious issue for WISPs deploying only unlicensed spectrum. I’m just speculating, but I have to think that was part of the reason for the NTIA decision – interference can mean that the delivered speeds are not reliably predictable.

A lot of the interference comes from the way that many WISPs operate. The biggest practical problem with unlicensed spectrum is that it is unregulated, meaning there is no agency that can force order in a chaotic wireless situation. I’ve heard numerous horror stories about some of the practices in rural areas where there are multiple WISPs.  There are WISPs that grab all of the available channels of spectrum in a market to block out competitors. WISPs complain about competitors that cheat by rigging radios to operate above the legal power limit, which swamps their competitors. And bad behavior begets bad behavior in a vicious cycle where WISPs try to outmaneuver each other for enough spectrum to operate. The reality is that the WISP market using unlicensed spectrum is a free-for-all – it’s the Wild West. Customers bear the brunt of this as customer performance varies day by day as WISPs rearrange their networks. Unless there is only a single WISP in a market, the performance of the networks using unlicensed spectrum is unreliable, almost by definition.

There are other issues that nobody, including WISPA, wants to address. There are many WISPs that provide terrible broadband because they deploy wireless technology in ways that exceed the physics of the wireless signals. Many of these same criticisms apply to cellular carriers as well, particularly with the new cellular FWA broadband. Wireless broadband can be high-quality when done well and can be almost unusable if deployed poorly.

There are a number of reasons for poor fixed wireless speeds. Some WISPs are still deploying lower-quality and/or older radios that are not capable of the best speeds – the same complaint has been leveled for years against DSL providers. ISPs often pile too many customers into a radio sector and overload it, which greatly dilutes the quality of the broadband that can reach any one customer. Another common issue is WISPs that deploy networks with inadequate backhaul. They will string together multiple wireless backhaul links to the point where each wireless transmitter is starved for bandwidth. But the biggest issue that I see in real practice is that some WISPs won’t say no to customers even when the connection is poor. They will gladly install customers who live far past the reasonable range of the radios or who have restricted line-of-sight. These practices are okay if customers willingly accept the degraded broadband – but too often, customers are given poor broadband at full price with no explanation.
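
To see why oversubscription and daisy-chained backhaul matter so much, here is a rough sketch of the busy-hour arithmetic. Every capacity and subscriber number below is hypothetical – the point is the shape of the math, not any specific vendor’s radio.

```python
# Rough busy-hour capacity estimate for one fixed wireless sector.
# All numbers are hypothetical and chosen only to illustrate the arithmetic.

sector_capacity_mbps = 500      # usable capacity of one access-point sector
backhaul_capacity_mbps = 300    # capacity of the (possibly multi-hop) backhaul feeding it
subscribers = 120               # customers sold on the sector
busy_hour_concurrency = 0.25    # share of customers active at the same moment

# The sector can never deliver more than the weaker of the radio and the backhaul.
effective_capacity = min(sector_capacity_mbps, backhaul_capacity_mbps)
active_users = subscribers * busy_hour_concurrency
per_customer_mbps = effective_capacity / active_users

print(f"Effective shared capacity: {effective_capacity} Mbps")
print(f"Busy-hour throughput per active customer: {per_customer_mbps:.0f} Mbps")
```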

Don’t take this to mean that I am against WISPs. I was served by a WISP for a decade that did a great job. I know high-quality WISPs that don’t engage in shoddy practices and who are great ISPs. But I’ve worked in many rural counties where residents lump WISPs in with rural DSL as something they will only purchase if there is no alternative.

Unfortunately, some of these same criticisms can be leveled against some WISPs that use licensed spectrum. Having licensed spectrum doesn’t overcome issues of oversubscribed transmitters, poor backhaul, or serving customers with poor line-of-sight or out of range of the radios. I’m not a big fan of giving grant funding to WISPs who put profits above signal quality and customer performance – but I’m not sure how a grant office would know this.

I have to think that the real genesis for the NTIA’s decision is the real-life practices of WISPs that do a poor job. It’s something that is rarely talked about – but it’s something that any high-quality WISP will bend your ear about.

By contrast, it’s practically impossible to deploy a poor-quality fiber network – it either works, or it doesn’t. I have no insight into the discussions that went on behind the scenes at the NTIA, but I have to think that a big part of the NTIA’s decision was based upon the many WISPs that are already unreliable. The NTIA decision means unlicensed-spectrum WISPs aren’t eligible for grants – but they are free to compete for broadband customers. WISPs that offer a high-quality product at a good price will still be around for many years to come.

A New Definition of Broadband?

FCC Chairman Jessica Rosenworcel has circulated a draft Notice of Inquiry inside the FCC to kick off the required annual report to Congress on the state of U.S. broadband. As part of preparing that report, she is recommending that the FCC adopt a new definition of broadband of 100/20 Mbps and establish gigabit broadband as a longer-term goal. I have a lot of different reactions to the idea.

First, the FCC is late to the game since Congress has already set a speed of 100/20 Mbps for the BEAD and other federal grant programs. This is entirely due to the way that the FCC has become totally partisan. Past FCC Chairman Ajit Pai was never going to entertain any discussion of increasing the definition of broadband since he was clearly in the pocket of the big ISPs. The FCC is currently split between two Democrats and two Republicans, and I find it doubtful that there can be any significant progress at the FCC on anything related to broadband in the current configuration. I have to wonder if the Senate is ever going to confirm a fifth commissioner – and if not, can this idea go anywhere?

Another thought that keeps running through my mind is that picking any speed as a definition of broadband is completely arbitrary. We know in real life that the broadband speed to a home changes every millisecond, and speed tests only take an average of the network chaos. One of the things we found out during the pandemic is that jitter might matter more than speed. Jitter measures the variability of the broadband signal, and a customer can lose connectivity on a network with high jitter if the speed drops too low, even for a few milliseconds.
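
As a concrete example of what jitter measures and why an average speed test can hide it, here is a minimal sketch that computes one common jitter figure – the average absolute difference between consecutive latency samples – from a handful of made-up ping times.

```python
# One common way to quantify jitter: the average absolute difference
# between consecutive latency samples. The sample values are made up.

latency_ms = [22, 24, 21, 95, 23, 22, 140, 25, 24, 23]   # hypothetical ping times

diffs = [abs(b - a) for a, b in zip(latency_ms, latency_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"Average latency: {sum(latency_ms) / len(latency_ms):.1f} ms")
print(f"Jitter (mean of consecutive differences): {jitter:.1f} ms")
```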

I also wonder about the practical impact of picking a definition of speed. Many of the current federal grants define a served customer as having an upload speed of at least 20 Mbps. It’s clear that a huge number of cable customers are not seeing 20 Mbps upload speeds, and I have to wonder if any State broadband offices will be brave enough to suggest using federal grant funding to overbuild a cable company. If not, then a definition of broadband as 20 Mbps upload is more of a suggestion than a rule.

Another issue with setting definitions of speed is that any definition of speed will define some technologies as not being broadband. That brings a lot of pressure from ISPs and manufacturers of those technologies. This was the biggest problem with the 25/3 Mbps definition and DSL. While it is theoretically possible to deliver 25/3 Mbps broadband on a single copper wire, the big telcos spent more than a decade claiming to meet speeds that they clearly didn’t and couldn’t deliver. We’re seeing the same technology fights now happening with a 100/20 Mbps definition of broadband. Can fixed wireless or low-orbit satellite technology really achieve 100/20 Mbps?

Another issue that has always bothered me about picking a definition of broadband is that the demand for speed has continued to grow. If you define broadband by the speeds that are needed today, then that definition will soon be obsolete. The last definition of broadband speed was set in 2015. Are we going to wait another seven years if we change to 100/20 Mbps this year? If so, the 100/20 Mbps definition will quickly become as obsolete in practice as 25/3 did.

Finally, a 100/20 Mbps speed is already far behind the market. Most of the big cable companies have recently declared their basic broadband download speed to be 200 Mbps. How can you set a definition of broadband that has a slower download speed than what is being offered to at least 65% of the households in the country? One of the mandates given to the FCC in the Telecommunications Act of 1996 was that rural broadband ought to be in parity with urban broadband. Setting a definition of broadband only matters for customers who don’t have access to good broadband. Do we really want to use federal money in 2022 to build 100 Mbps download broadband when a large majority of the market is already double that speed today?

Trying to define broadband by a single speed is a classical Gordian knot – a problem that can’t be reasonably solved. We can pick a number, but by definition, any number we choose will fail some of the tests I’ve described above. I guess we have to do it, but I wish there was another way.

Improving Network Resiliency

The FCC, in Docket FCC 22-50, is requiring changes that it hopes will make cellular networks more reliable and resilient, so that carriers are better prepared for and respond better to emergencies. The order cites recent emergencies like Hurricane Ida, the earthquakes in Puerto Rico, severe winter storms in Texas, and worsening hurricane and wildfire seasons. This makes me wonder if we might someday see similar requirements for ISPs and broadband networks.

The FCC wants to leverage the industry-developed Wireless Network Resiliency Cooperative Framework as a starting point for introducing new rules it is calling the Mandatory Disaster Response Initiative (MDRI).

The new rules first codify and make mandatory the existing voluntary industry framework, applying it to all facility-based mobile wireless providers. That framework includes five principles:

  • Providing for reasonable roaming under disaster arrangements when technically feasible.
  • Fostering mutual aid among wireless providers during emergencies.
  • Enhancing municipal preparedness and restoration by convening with local government public safety representatives to develop best practices and by establishing a provider/PSAP contact database.
  • Increasing consumer readiness and preparation through development and dissemination of a Consumer Readiness Checklist with consumer groups.
  • Improving public awareness and stakeholder communications on service and restoration status through FCC posting of data on cell site outages on a county-by-county basis.

The new rules require cellular network owners to regularly test their emergency capabilities. This is in response to network failures in some of the disasters mentioned in the order, where network owners were not prepared to deal with an emergency. The new order further requires cellular network owners to file a report with the FCC after every declared emergency describing in detail how the carrier responded to the emergency.

It’s a change that is overdue because, as the FCC notes, lives are dependent during an emergency on a functioning cellular network. It’s a shame that the FCC has to make such an order. There was a time when big carriers and telcos took social obligations like emergency preparedness seriously and took pride in the ability to respond to emergencies. I can recall decades ago how big telcos would publicize how quickly they were able to restore service, even after disasters completely destroyed central offices and networks.

But as the number of cellular carriers has grown and as the industry has gotten more competitive, with lower margins, functions like emergency preparedness that don’t contribute to the bottom line slowly slide through lack of attention and funding.

I suspect at some point that we’ll see similar rules for broadband networks. I’m aware of numerous examples in recent years where failures in the backhaul fiber network have isolated towns from the Internet. I’ve mentioned Project Thor in Colorado a few times – a municipally driven initiative to connect cities in northwest Colorado with fiber, prompted by repetitive outages on CenturyLink backhaul networks that were killing Internet access for hospitals, 911 centers, and other critical public safety infrastructure.

One issue the order doesn’t address is that there are still large parts of rural America that have poor or nonexistent cellular coverage. The coverage maps of the big cellular carriers are a joke in much of rural America. My consulting firm does surveys, and it’s not unusual in rural counties to see 30% or more of residents claiming to have no cellular coverage at their home. For these folks, a broadband connection is their lifeline to the world in the way that a cellular connection is vital to others during and after an emergency.