New NTIA Grants

This is the year for unusual and unexpected broadband grant opportunities. The NTIA released a Notice of Funding Opportunity (NOFO) on May 21 for a broadband grant program it is labeling the Broadband Infrastructure Program. The NTIA will be awarding up to $288 million in grants, with the funding provided from the $1.9 trillion American Rescue Plan Act.

This is an unusual grant program because the money is aimed entirely at public-private partnerships (PPPs). The applications must be submitted by a government entity, but each application must identify the specific partner that will operate the broadband business. I can’t think of another grant program in the past that even favored PPPs, let alone one that is only available to PPPs. It’s going to be interesting to see if there are enough rural PPPs in existence to use all of the money.

The grants are holding to the firm definition that the money can only be used in places where speeds are less than 25/3 Mbps. This creates a huge dilemma if the NTIA is going to stick to the lousy FCC mapping data that incorrectly shows huge swaths of rural areas as having 25/3 Mbps broadband. One would hope that the NTIA will be open to accepting evidence that actual speeds are often far slower than what has been claimed by some telcos and WISPs. If not, it’s going to be hard to find rural areas that weren’t already covered by the RDOF grants.

Like all current federal broadband grants, these grants can’t be used in areas where prior grants have already been awarded but the funded networks are not yet constructed. That’s going to create an interesting dilemma for some communities. There are some RDOF grant areas that are being heavily disputed, and which may not get awarded. The FCC also awarded grants to Viasat in last year’s reverse auction, and communities are rightfully upset that these places are not eligible to get fiber. There is growing concern about the pending RDOF awards made to Starlink.

Grant applications must include an engineered business plan. The NTIA expects the engineering to be solid because it expects projects to be built within one year of grant awards. The NTIA can grant a one-year extension for construction in some circumstances. But this rapid construction expectation means the NTIA only wants to see applications from ‘shovel-ready’ projects. Any community thinking of pursuing these grants should be forming the needed partnership immediately.

The grant applications are due by August 17. The NTIA doesn’t expect to start making grant awards until at least November 29. The NOFO notes that the NTIA might award additional funding to approved projects if there is not enough demand for the funding.

The NTIA warns that it will likely not award money to small projects, and it expects awards to be between $5 million and $30 million. That’s understandable when you consider that the agency is going to have to process a lot of the grant requests quickly between August and November. Applicants would be wise to apply early.

While there is no statutory reason that NTIA cannot award 100% grants, they caution applicants that they will favor projects that contribute matching funds of 10% or more – the NTIA wants to see the commercial partners have some skin in the game. They also want these matching funds to be non-federal dollars, meaning the matching shouldn’t come from some other bucket of funding from the $1.9 trillion ARPA program.

This is probably the most unusual federal broadband grant I can remember in that the funding is only available to public-private partnerships and no other business structure. Since the grants are only being awarded to the public member of the partnership, this also implies ownership of the network by local governments and some sort of ongoing participation in the business. It’s going to be interesting to see how partnerships are created to meet these grant requirements.

Fast Polymer Cables

Scientists and engineers are always looking for ways to speed up and more efficiently configure computing devices to maximize the flow of data. There are a lot of applications today that require the exchange of huge volumes of data in real-time.

MIT scientists have created a hair-like plastic polymer cable that can transmit data ten times faster than copper USB cables. The scientists recently reported speeds on the new cables in excess of 100 gigabits per second. The new fibers mimic the best characteristics of copper cable in that electronic signals can be conveyed directly from device to device.

https://spectrum.ieee.org/tech-talk/computing/networks/plastic-polymer-cables-that-rival-fiber-optics

Another interesting characteristic of polymer cables is that it’s possible to measure the flow of electrons through each cable from outside – something that is impossible to do with fiber optic cables. This is a key function that can be used to direct the flow of data at the chip level in fast computing devices.

If brought to market, these cables will solve several problems in the computing industry. The new fibers mimic the best feature of copper wires: like USB cables, they are easily compatible with computer chips and other network devices. A copper Ethernet cable can connect two devices directly with no need to reformat data. Fiber cables are much faster than copper but require an intermediate device to convert light signals back into electronic signals at each device.

There are immediate uses for faster cables in applications like data centers, self-driving cars, manufacturing robots, and devices in space. The new cables would be a benefit anywhere that large amounts of data need to be transferred in real-time from device to device. Since the polymer fibers are thin, they could also be used to speed up data transfer between chips within devices.

The data transmission rates on the polymer cables are currently at 100 gigabits per second for a distance of around 30 centimeters. The MIT scientists believe they will be able to goose speeds to as much as a terabit while increasing transmission distances to a meter and beyond.

There is a long way to go to move a new technology from laboratory to production. There would first need to be industry standards developed and agreed upon by the IEEE. Using new kinds of cables means changing the interface into devices and chips. There are also the challenges of mass manufacturing the new cables and of integrating them into the existing supply chain.

I’m always amazed at how modern science seems to always find solutions when we need them. We are just now starting to routinely use computer applications like AI that rely on quickly moving huge amounts of data. Just a decade ago nobody would have been talking about chips that needed anything close to 100 gigabits of input or output. It’s easy to assume that computing devices somehow get faster when chips are made faster, but these new cables act as a reminder that there are numerous components required in making faster computing. Fast chips do no good if we can’t get data in and out of the chip fast enough.

Comcast Tests DOCSIS 4.0

Comcast recently conducted its first test of the DOCSIS 4.0 technology and achieved a symmetrical 4-gigabit connection. The test was enabled by a DOCSIS 4.0 chip from Broadcom. The DOCSIS 4.0 standard was released in March 2020 and this is the first test of the new standard. The DOCSIS 4.0 standard allows for a theoretical transmission of 10 Gbps downstream and 6 Gbps upstream – this first test achieved an impressive percentage of the capability of the standard.
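To put the test result in context, a quick back-of-the-envelope calculation (my own arithmetic, using only the figures cited above) shows how close the 4-gigabit symmetrical result came to the standard's theoretical ceiling:

```python
# Back-of-the-envelope: how close the first DOCSIS 4.0 test came
# to the standard's theoretical maximums (figures cited above).
test_down_gbps = 4.0   # symmetrical test result
test_up_gbps = 4.0
max_down_gbps = 10.0   # DOCSIS 4.0 theoretical downstream
max_up_gbps = 6.0      # DOCSIS 4.0 theoretical upstream

pct_down = 100 * test_down_gbps / max_down_gbps
pct_up = 100 * test_up_gbps / max_up_gbps
print(f"Downstream: {pct_down:.0f}% of theoretical maximum")  # 40%
print(f"Upstream: {pct_up:.0f}% of theoretical maximum")      # 67%
```

Hitting two-thirds of the theoretical upstream ceiling in a first lab test is the notable part, since upload capacity is the weak spot of today's cable networks.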

Don’t expect this test to mean that cable companies will be offering fast symmetrical broadband any time soon. There is a long way to go from the first lab test to a product deployed in the field. Lab scientists will first work to perfect the DOCSIS 4.0 chip based upon whatever they found during the trial. It typically takes most of a year to create a new chip and it would not be surprising for Comcast to first spend several years and a few iterations to solidify the chip design. Assuming Comcast or some cable company is ready to buy a significant quantity of the new chips, it would be put into the product design cycle at a manufacturer to be integrated into the CMTS core and home cable modems.

That’s the point when cable companies will face the tough choice of pursuing the new standard. When the new technology was announced last year, most of the CTOs of the big cable companies were quoted as saying they didn’t foresee implementation of the new standard for at least a decade. This is understandable since the cable companies recently made the expensive upgrade to DOCSIS 3.1.

An upgrade to DOCSIS 4.0 isn’t going to be cheap. It first means the replacement of all existing electronics in a rip-and-replace upgrade. That includes cable modems at every customer premises. DOCSIS 4.0 will require network capacity to be increased to at least 1.2 GHz. This likely means replacement of power taps and network amplifiers throughout the outside plant network.

There is also the bigger issue that the copper plant in cable networks is aging in the same manner as telco copper. There are already portions of many cable company networks that underperform today. Increasing the overall bandwidth of the network might result in the need for a lot of copper replacement. And that is going to create a pause for cable company management. While the upgrade to DOCSIS 3.1 was expensive, it’s going to cost more to upgrade again to DOCSIS 4.0. At what point does it make sense to upgrade instead to fiber rather than tackle another costly upgrade on an aging copper network?

There is then the market issue. The cable companies are enjoying an unprecedented monopoly position. Comcast and Charter together have over half of all broadband customers in the country. While there are households that are unhappy with Comcast or Charter broadband, most don’t have any competitive alternative. The FCC statistics and the screwball websites that claim that Americans have multiple broadband choices are all fiction. For the average urban or suburban family, the only option for functional broadband is the cable company.

This market power means that the cable companies are not going to rush into making upgrades to offer greater speeds just because the technology exists. Monopolists are always slow to introduce technology upgrades. Instead, the cable companies are likely to continue to increase customer speeds across the board. Both Charter and Comcast did this recently and increased download speeds (or at least the download speed they are marketing).

I expect that the early predictions that it would be a decade before we see widespread DOCSIS 4.0 were probably pretty prescient. However, a continued clamor for faster upload speeds or rapid deployment of fiber by competitors could always move up the time when the cable companies have to upgrade to DOCSIS 4.0 or fiber. But don’t let headlines like this make you think this is coming soon.

Satellite Broadband Heating Up

There is a lot of news recently about low-orbit satellite broadband, with developments at each of the three primary companies vying in this space.

First is Jeff Bezos’ Project Kuiper, which is still likely to get a brand name at some point. Project Kuiper has contracted with United Launch Alliance, a joint Boeing-Lockheed Martin venture, for its first nine rocket launches. It’s been speculated that these launches will carry something under 500 satellites into orbit – including the company’s first test satellites.

Project Kuiper has plans to launch 3,236 satellites and the company says it will need 578 satellites to begin offering limited service. The agreement with the FCC would require the company to launch half of the total satellites before 2026, although it appears the company intends to get to that number sooner. There have been no announced dates for the nine launches, but one would think they’ll start this year.

Project Kuiper is taking a different strategy than Starlink and is launching larger and more capable satellites rather than swarms of cheaper disposable satellites. It will be interesting to see what this difference means in terms of customer coverage and bandwidth. The company has already been funded with $10 billion from Jeff Bezos, and it seems likely the company will eventually do what’s been announced.

OneWeb is also back in the news. Eutelsat, one of the world’s largest operators of satellites, has invested in a 24% stake in the company. This adds to the existing ownership by the UK government and Bharti Global, a large cellular carrier in India.

OneWeb is taking a third path and plans to launch 648 larger satellites that are basically floating data centers. The company recently launched 36 satellites, bringing it to a total of 182 satellites in orbit. The company says it will be able to start serving the UK, Alaska, northern Europe, Greenland, Iceland, and northern Canada after two more launches and plans to be able to serve the whole planet by the end of 2022. It’s no longer clear after the change of ownership if the company will support residential broadband or will pursue connectivity for larger users like cellular towers and corporate users.

Starlink and SpaceX are all over the news. Starlink is now taking $99 deposits from prospective customers and has already signed up 500,000 of them – pocketing a quick $50 million. There is no guarantee that any customer will be able to receive service. Starlink now has 1,300 satellites in orbit and says it will begin offering retail service by the end of 2021.

Starlink download speeds in beta tests still show a range between 50 Mbps and 150 Mbps – a great upgrade for customers using rural DSL or cellular hotspots. Elon Musk continues to say that by next year broadband speeds will approach 300 Mbps, something that is doubted by a number of industry engineers who question the ability of the constellation to handle a significant number of customers. But there are also problems becoming apparent during the beta test. A recent article in The Verge claims that Starlink can’t handle trees or other impediments that block the horizon. That’s not promising for serving homes in the woods or mountains – like in my area of western North Carolina.

Starlink also won a recent regulatory battle at the FCC. The ruling is extremely technical, but the gist is that Starlink will be able to deploy some satellites in a lower orbit, which will allow a lower elevation angle for customer receivers and ought to increase speeds somewhat. But Starlink faces another upcoming battle over the spectrum that is used by the satellite fleet to backhaul traffic to and from the earth. The battle is over spectrum between 12.2 GHz and 12.7 GHz, which is primarily licensed to Dish Network. Dish wants to use this spectrum for terrestrial 5G, and this would greatly curtail Starlink’s backhaul capabilities. The FCC ruling warned Starlink that it may not get access to the spectrum.

Within a year or two, a lot of the hype concerning satellite broadband will be behind us as we start seeing commercial satellite fleets in operation. Of particular interest to those watching this space will be if Starlink can achieve the broadband speeds being touted once the network is under full customer load. By the looks of the applications pouring into the company, it won’t take long to find out.

Mandated Low-Income Prices

ISPs are vehemently opposed to a law recently passed in New York that requires ISPs to offer low-income broadband at prices of $15 to $20. This is a dreadful law for several reasons which I’ll discuss below. But before that, I have to first make fun of the main argument the ISPs are using to fight against the law.

An array of ISP trade associations opposing the law includes the New York State Telecommunications Association, CTIA, ACA Connects, USTelecom, NTCA, and SBCA. It seems that the best argument these trade associations can muster for why this law should be invalid is that the State doesn’t have the right to impose it because only the FCC has rate authority over broadband. The problem with that argument is that the FCC doesn’t have rate authority – or regulatory authority of almost any kind over broadband – due to heavy lobbying by these same trade associations, which killed Title II regulation. The group is pointing to a ruling made by FCC Chairman Ajit Pai that says that States have no authority over broadband rates – made at the same time that Chairman Pai killed the FCC’s own authority to regulate rates. Chairman Pai’s position was that nobody should have rate authority over ISPs – which means nothing after the FCC walked away and created a regulatory vacuum. States are generally able to fill a vacuum, much like California was able to implement net neutrality after the FCC killed its own regulations on the issue.

There is a long list of reasons why ISPs hate this particular regulation. The first reason is practical. Ignore for a minute that somebody like Comcast might be able to afford this and consider instead a smaller independent fiber overbuilder. It costs on average around $1,000 to install a new customer on fiber just to build the fiber drop and install the needed electronics. It would take 50 months at a $20 monthly fee for an ISP to recover just the cost of the installation – and that still hasn’t paid a penny towards the cost of operating the business. If this law would require a small ISP to install 1,000 subsidized customers (not a big number), the ISP would have to spend $1 million (that it probably doesn’t have) and then be repaid at $20,000 per month. Anybody that knows the small ISP world knows that this scenario is untenable, and the law could sink a fledgling ISP and put them out of business. Small ISPs would have no choice but to ignore this law – the penalties for non-compliance would likely be less costly than the cost of compliance.
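The payback arithmetic in that example can be sketched in a few lines (a quick illustration using only the numbers from the paragraph above, not actual figures from any ISP):

```python
# Illustrative payback math for a small fiber ISP forced to serve
# customers at a mandated $20/month rate (figures from the example above).
install_cost_per_customer = 1_000   # average drop + electronics per install
monthly_rate = 20                   # mandated low-income price
subsidized_customers = 1_000        # hypothetical take rate

months_to_recover_install = install_cost_per_customer / monthly_rate
total_outlay = install_cost_per_customer * subsidized_customers
monthly_repayment = monthly_rate * subsidized_customers

print(months_to_recover_install)  # 50.0 months, before a penny of operating costs
print(total_outlay)               # $1,000,000 up-front cash requirement
print(monthly_repayment)          # $20,000 per month coming back in
```

Note that the 50-month figure only recovers the installation cost; it ignores the cost of operating the business entirely, which is what makes the scenario untenable for a small ISP.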

Localities have complained about broadband redlining for years. If New York makes ISPs wholly shoulder the cost burden to make broadband affordable, then the state will never see any more broadband investment in poor neighborhoods. ISPs will never come to areas with a substantial percentage of low-income households and the ISPs who are already in such neighborhoods will start figuring out ways to walk away from these neighborhoods. Put this law into place and in twenty years there will be true broadband deserts across New York with large neighborhoods that have no broadband options other than smartphones. Whoever wrote this law didn’t spend much energy thinking about the practical consequences.

The ISPs have an even larger concern with this law. If the State can set a rate for low-income broadband, then there is likely no reason that the State can’t also put a cap on broadband bills. What’s to stop the state from declaring that gigabit broadband can’t cost more than $50?

Even should a State decide to tackle the issue of rate regulation, it ought to do so in a deliberate and reasonable manner by weighing all sides of the rate regulation issue. Legislators are the worst possible group of folks to create this kind of regulation because they will always take the populist view without worrying about the consequences. If New York starts to regulate rates through ad hoc regulation as suggested by this bill, my advice to ISPs would be to never spend another penny in the state and take investments elsewhere. I’m trying to imagine what would happen in New York if Verizon and Charter decided to not invest another dollar. In ten years, the State might have the worst broadband networks in the country.

There is nothing good about this law because it piles the full burden of helping low-income homes on ISPs. Everybody in the industry, including the ISPs, understands that the country would be better off if everybody were connected to the Internet. But digital inclusion efforts are going nowhere if ISPs are expected to solely fix the problem. The right solution will require some give by ISPs along with some subsidies from the government. There has to be a middle ground that makes sense, and we’re not going to find that middle ground through legislation like this.

Interim Treasury Rules Part 3

There is one interesting sentence in the Treasury guidance rules describing how to use broadband funding from the $350 billion aimed at local governments in the American Rescue Plan Act.

Treasury also encourages recipients to prioritize support for broadband networks owned, operated by, or affiliated with local governments, non-profits, and co-operatives—providers with less pressure to turn profits and with a commitment to serving entire communities.

Note that the key verb in that statement is ‘encourages’, meaning that the statement is not actionable and doesn’t require that funding be used in this manner. But this is the first time that such language has ever been incorporated in anything federal related to broadband. In the past, the language was just the opposite. Past federal grants have always been aimed at existing carriers and merely threw in a footnote mentioning that the money could also be used by entities like municipalities.

This language has to be ultimately coming from the White House – I can’t think of any other reason why it’s in the document. The new administration has said that it’s in favor of using every tool in the toolbox to get broadband everywhere, including enabling municipalities to find local broadband solutions.

In 2015, the FCC under Tom Wheeler tried to expand the ability of municipalities to provide broadband. The FCC issued an order that overturned state laws in Tennessee and North Carolina that prohibited the municipal ISPs in Chattanooga, TN, and Wilson, NC from expanding broadband outside city boundaries. The FCC was testing the waters on its authority to overturn state restrictions on municipal broadband. In a rejection of the FCC’s authority to preempt states, the order was overturned by the Sixth Circuit Court of Appeals in August 2016.

The FCC is in an even weaker position today to try something similar since the Ajit Pai FCC eliminated Title II regulation of broadband. An FCC that doesn’t directly regulate broadband is in no position to challenge state broadband rules. Even if the current FCC was to restore Title II authority or something similar, it’s doubtful that the agency can challenge state laws based upon the 2016 ruling.

This means that the only challenge to state restrictions on broadband will have to come from Congress. There has been talk of adding this provision to several of the bills moving through the current Congress. But Congressional action on the issue would set off a round of court challenges, which are almost automatic anytime the federal government tries to preempt state laws of any kind.

I hope that any cities or counties reading the Treasury guidelines don’t think that the Treasury has granted local governments the ability to offer retail broadband. The Treasury statement is not much more than wishful thinking for municipalities in states that don’t allow municipal broadband. A municipality that is in a state that restricts municipal participation in broadband must follow the state laws when using this latest federal funding. A city can’t directly build fiber infrastructure if the state doesn’t allow it; a municipality can’t offer broadband services where that’s prohibited.

I think municipalities probably still appreciate this gesture from Treasury because it signals a change in regulatory philosophy at the federal level. But neither the White House nor any of its agencies currently have the power to grant the right to municipalities to offer broadband. It’s an interesting sentiment in this case, but not much more.

Interim Treasury Rules Part 2

On Wednesday I published a blog that discussed my perceptions of the Treasury’s Interim Final Rules for using the $350 billion of funding provided to localities from the American Rescue Plan Act. Today I want to emphasize that blog was based strictly upon the suggested rules from Treasury. It’s important to realize those rules are interim only, and any changes to the rules could have a drastic impact on how the funding can be used for broadband.

Note that Treasury asked five questions related to broadband. These are important enough that I list them:

Question 22: What are the advantages and disadvantages of setting minimum symmetrical download and upload speeds of 100 Mbps? What other minimum standards would be appropriate and why?

Question 23: Would setting such a minimum be impractical for particular types of projects? If so, where and on what basis should those projects be identified? How could such a standard be set while also taking into account the practicality of using this standard in particular types of projects? In addition to topography, geography, and financial factors, what other constraints, if any, are relevant to considering whether an investment is impracticable?

Question 24: What are the advantages and disadvantages of setting a minimum level of service at 100 Mbps download and 20 Mbps upload in projects where it is impracticable to set minimum symmetrical download and upload speeds of 100 Mbps? What are the advantages and disadvantages of setting a scalability requirement in these cases? What other minimum standards would be appropriate and why?

Question 25: What are the advantages and disadvantages of focusing these investments on those without access to a wireline connection that reliably delivers 25 Mbps download by 3 Mbps upload? Would another threshold be appropriate and why?

Question 26: What are the advantages and disadvantages of setting any particular threshold for identifying unserved or underserved areas, minimum speed standards or scalability minimum? Are there other standards that should be set (e.g., latency)? If so, why and how? How can such threshold, standards, or minimum be set in a way that balances the public’s interest in making sure that reliable broadband services meeting the daily needs of all Americans are available throughout the country with providing recipients flexibility to meet the varied needs of their communities?

I can’t stress enough that having somebody other than the FCC asking these questions is really unusual. While Treasury feels they need to understand these issues better to establish final funding rules, these are largely policy questions. As such, the responses are not easy to describe to Treasury. If the FCC asked these questions I might respond – but in doing so I could take shortcuts in my responses since I could assume that there are many things that the FCC already understands about broadband.

But I couldn’t make that assumption in responding to Treasury. How does one go about explaining the long history and disaster that is the FCC 477 mapping data? The nuances of the FCC data are a major factor in discussing anything related to 25/3 Mbps. How does one convince Treasury the extent to which many ISPs have seemingly fabricated the 477 reporting?

Even more fundamentally, how does one talk about what it means to reliably deliver 25/3 Mbps speeds? Would they believe you if you say that no DSL connection has reliable speeds and that speeds can easily vary by 100% or even far more during an average day? How do you explain how hard it is to measure speeds in the first place – something that’s never been grasped by the FCC? The questions assume that the speed of a product to a single customer can be cleanly defined – and real-world broadband speeds don’t work like that.

I’m really perplexed by question 23, which asks respondents to describe all of the issues that might define practicality for a given business plan. I write 200-page reports that try to answer that question for clients.

I have to admit that it makes me nervous to have a federal agency that has nothing to do with broadband ask these kinds of questions. Those of us who might be able to answer these questions by spending a day at a whiteboard will not have the time or ability to fully answer in writing the complex questions that Treasury has asked. The answers they are going to get are from the various trade associations that are all going to be heavily biased with a particular industry point of view. After hearing from the cable association, the fixed wireless association, the fiber association, and the big and the small telco associations I can’t imagine how Treasury will reconcile the differences from each of these responses. I can read something from a trade association and quickly tell you what is factual versus pure bosh – but how will Treasury do this?

I have to say that the fact that these questions are still open makes me nervous. Cities, counties, and states are trying to make plans on how to allocate and spend this funding. If this process takes too long, local governments are going to use the money for purposes other than broadband. If the final rules are muddy or too restrictive, localities will also find it easier to pass on using the money for broadband. Even after Treasury gets answers to these questions, I can’t imagine a reasonable way for them to somehow interpret the wide range of responses they’ll get to create a coherent policy – something the FCC has frankly never done well. Ideally, Treasury will adopt what they’ve already written for broadband and be done with it.

A Look Back at the Pandemic

I was looking back at industry reporting a year ago after the impact of the pandemic first hit our broadband networks. Almost every big ISP issued press releases talking about how well it had weathered the pandemic and bragged about the resiliency of its networks.

It turns out that these ISP press releases largely missed the point. They are right that their networks didn’t crash, but once we understood the nature of the changes in broadband traffic due to the pandemic that wasn’t a big surprise. The pandemic caused a huge upsurge in daytime broadband traffic. People who were normally at work or at school were suddenly using the Internet from home on weekdays. ISP networks largely didn’t crash because the new daytime volumes were no larger than the evening peak usage that ISPs had been handling for years. ISPs have always engineered networks to handle the busy hour – that time of the day when networks are the busiest. In residential neighborhoods that has always been sometime in the evening.

OpenVault documented the surge in Internet traffic volumes. At the end of the first quarter of 2020, the average home was using 403 gigabytes of broadband per month, up 47% over the average usage of 274 gigabytes a year earlier. But since the extra usage happened during the daytime, when networks had been typically quiet, the ISPs easily handled most of this surge. If this extra usage had come in the evenings, we would have seen some spectacular crashes.

It’s interesting to go back and read the press releases at the time when ISPs seemed downright giddy at having survived the storm. The network engineers at these ISPs fully understood that their networks were not in danger, but ISP management wanted to garner praise from regulators and politicians for having robust networks.

Interestingly, I can’t find a single early press release that talks about the growth in upload bandwidth. OpenVault recently released a special report about the surge in upload bandwidth during COVID. This is the area where many ISPs failed during the pandemic, but none of them mention the issue. The average home’s upload usage exploded from 19 gigabytes per month in January 2020 to 27 gigabytes by April – growth of 42% in a short time. By the end of 2020, the average upload usage had grown to 31 gigabytes per month – a growth rate for the year of 63%.

No technology other than fiber handled this upload surge well. The surge in upload demand was twice the growth in download demand. OpenVault says that daytime upload demand grew by 99% with the pandemic, with overall growth of upload demand at 63% for the year.
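The growth percentages implied by the OpenVault figures are easy to verify from the raw gigabyte numbers cited above:

```python
# Growth implied by the OpenVault usage figures cited above
# (gigabytes per month for the average home).
download_2019, download_q1_2020 = 274, 403
upload_jan_2020, upload_apr_2020, upload_dec_2020 = 19, 27, 31

def pct_growth(old: int, new: int) -> int:
    """Percentage growth from old to new, rounded to a whole percent."""
    return round(100 * (new - old) / old)

print(pct_growth(download_2019, download_q1_2020))   # 47% download growth, year over year
print(pct_growth(upload_jan_2020, upload_apr_2020))  # 42% upload surge in just three months
print(pct_growth(upload_jan_2020, upload_dec_2020))  # 63% upload growth for the full year
```

The striking comparison is the rate of change: upload demand grew by almost as much in three months as download demand grew in a full year.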

I have yet to see a major cable company admit they had a problem with upload usage. My consulting firm did a lot of residential surveys during COVID, and we routinely saw about 30% of homes say that their cable broadband was inadequate for working or schooling from home. That 30% negative response represents a large portion of the homes that were trying to cope with working and schooling from home.

It’s also worth remembering that homes use upload bandwidth for more than school and work connections. A lot of homes today automatically upload and store pictures and work files in the cloud. Microsoft and other mainstream software vendors have pushed hard for people to use the cloud versions of their software. Our IoT devices are often storing health and other data in the cloud. A lot of gaming has moved to the cloud. Our computers and other devices routinely communicate back and forth with the cloud to make sure our software is up to date. All of these uses grew during the pandemic year as people spent more time at home instead of at school or the office.

According to the OpenVault data, the growth in upload usage has not abated. It looks likely that our increased demand for upload bandwidth isn’t going away. And yet, I still don’t hear a peep out of the cable companies about how they are going to deal with the new demand.

Treasury Releases Broadband Grant Rules

Earlier this week the Department of the Treasury released an Interim Final Rule that defines how the $350 billion of pandemic relief funding from the American Rescue Plan Act can be used. The money is being sent to states and localities to use in a wide variety of ways that include broadband.

I must admit that on first reading I was unhappy with the Treasury ruling. The strictest reading of the rules is that the money is intended for areas that don’t have broadband speeds of at least 25/3 Mbps, to be upgraded with technologies that deliver speeds of at least 100/20 Mbps, and preferably 100/100 Mbps. We just went through the RDOF grant process that supposedly allocated funds to most places in the country where the FCC thinks speeds are less than 25/3 Mbps. At first I envisioned having trouble finding places to use this money.

But I read it a few more times, and my 35 years of reading government regulatory orders kicked in. The more I read, the more I started seeing ways that localities can use the money creatively. The first thing I noticed after a few re-readings is that there is no ‘shall’ language in the broadband rules. Government lawyers are always careful to use ‘shall’ for anything that is mandatory, and the word ‘shall’ is not included anywhere in the broadband guidelines.

Next, I realized that localities don’t have to ask anybody how to use the funds for broadband. There is no application process for grants – no need to write a grant request to justify how to use the money. There also are no implied penalties for not using the funding in a specific manner. The CARES funding from last year included strongly worded claw-back language that threatened to take back any funding that States didn’t use properly. There is no claw-back or penalties associated with this funding that I can find.

There is also what I would call soft regulatory language – the kind of language that causes just enough ambiguity to indicate that the authors of the language want loopholes. Localities are provided ‘flexibility’ in choosing the parts of the community to serve. While there is an indication that the money can only be used in areas with 25/3 Mbps or slower broadband, that rule is modified to say areas that are “reliably” delivering 25/3 Mbps. There are a whole lot of urban networks that don’t reliably deliver good speeds.

The order goes on to ‘encourage’ behavior by localities. Communities are encouraged to consider broadband affordability. Communities are encouraged to concentrate on last-mile connections. And communities are encouraged to use the funding for projects that are operated by or affiliated with local government, non-profits, and cooperatives.

The big kicker for me is the last paragraph of the broadband section, which says that “Under sections 602(c)(1)(A) and 603(c)(1)(A), assistance to households facing negative economic impacts due to COVID-19 is also an eligible use, including internet access or digital literacy assistance.” I’ve read this many times and I conclude that this is a second, separate use for the funds – the funds can be used to bring broadband to areas that don’t have 25/3 Mbps OR the funds can be used to assist households hurt by the pandemic. I read this as giving full cover to use the funding in cities for broadband for neighborhoods that need help.

I’m not a lawyer, and my reading of the language could be wrong, and perhaps the cable company lobbyists think the language is restrictive. But I’ve been reading regulatory rulings for decades and I just can’t find the kind of language in this ruling that demands using the money in only a prescribed way. Additionally, I keep coming back to the advice sprinkled throughout the ruling to be flexible.

It’s worth noting that this document is not final and could still change. The big ISP lobbyists will be trying to add stronger language to a final order, while rural and municipal proponents will want to clarify even further that localities have a lot of options for using the money. It’s going to be an interesting several months while the industry dissects and parses the language in this document. I sure hope my reading is right, because I see flexible ways for communities of all sizes to use this funding to tackle the digital divide.

Those Troublesome FCC Maps

The FCC is in the process of reworking its broadband maps. The task of doing so is complicated and the new maps are likely going to be a big mess at first. In a recent article in Slate, Mike Conlow discusses two of the issues that the FCC will have trouble getting right.

One issue is identifying rural homes and businesses. We know from recent auctions that the FCC’s assumption about the number of homes in a Census block is often wrong. It’s hard to count homes without broadband if we don’t know how to count homes in general. The mapping firm CostQuest suggests counting homes using satellite data. But the article shows how hard that can be. For instance, it shows a typical farm complex that has multiple buildings. How does an automated mapping program count the homes in this situation? Mixed among the many farm buildings could be zero homes, one home, or several homes.

If you have ever looked at satellite maps in West Virginia, you see the opposite problem. There are homes under total tree cover that aren’t seen by a satellite. To really complicate matters, there are several million rural vacation homes in the country, many not more than a shack or small cabin, many without power. How is satellite mapping going to distinguish a cabin without power from a home with full-time residents? It’s unlikely that a national attempt to count homes using satellite data is going to get this even close to right – but it means many millions of dollars to CostQuest to try.

The second mapping issue comes from ISPs that will have to draw polygons around service areas that have broadband or can get broadband within 10 days of a service request. The article shows a real example where it’s easy to draw a polygon along roads that will leave out homes that are back long lanes or driveways.

When ISPs convert to this new mapping with the polygons, especially if housing data comes from satellite imagery, the resulting maps are going to have a lot of problems. The first iterations of the new maps will differ significantly from today’s mapping and it’s going to be nearly impossible to understand the difference between old and new.

As complicated as these two issues are, they are not the biggest problem with the mapping. The big issue that nobody in Congress or the FCC wants to talk about is that it’s nearly impossible to know the broadband speed delivered to a home. For most broadband technologies, the speed being delivered changes from second to second, and from minute to minute. If you don’t think that’s true, then run a speed test at home a few dozen times today, every few hours. Unless your broadband comes from a stable fiber network, the chances are that you’ll get a wide range of speed test readings. After taking these multiple tests, tell me the broadband speed at your house. If it’s hard to define the speed for a single home, how are we supposed to tackle this en masse?
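To make the point concrete, here is a small illustrative sketch of why a single number can’t describe a variable connection. The speed readings below are hypothetical, not real measurements:

```python
import statistics

# Hypothetical speed-test readings (Mbps) taken over one day on a
# non-fiber connection -- illustrative numbers only, not real data.
readings = [24.1, 8.7, 15.3, 31.0, 6.2, 19.8, 11.4, 27.5]

mean = statistics.mean(readings)
median = statistics.median(readings)
spread = max(readings) - min(readings)

print(f"mean={mean:.1f} Mbps, median={median:.1f} Mbps, "
      f"range={min(readings):.1f}-{max(readings):.1f} Mbps")
```

These eight hypothetical readings average 18 Mbps, but the actual experience ranged from about 6 Mbps to 31 Mbps. Is this home above or below the 25/3 Mbps threshold? Any single answer hides most of the story.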

But let’s just suppose that in some magical way the FCC could figure out the average speed at a home over time. That still doesn’t help with the FCC mapping, because ISPs will be allowed to report marketing speeds and not actual speeds to the FCC. The Slate article suggests that the biggest problem in today’s maps comes from counting broadband by Census blocks – where if one home has fast broadband, the entire Census block is counted as fast. That is a much smaller issue than people assume. The majority of misstated rural speeds today come instead from ISPs that claim they sell a speed that is much faster than what is delivered. Big telcos today report rural areas as having 25/3 capability for no reason other than the ISP says so – when in reality there might not be a single customer in that area getting even 10/1 Mbps from DSL. The big telcos have successfully been lying about speed capability for years as a way to shield areas against being overbuilt by grant recipients. Recall that Frontier tried to sneak in over 16,000 speed changes for Census blocks just before the deadline of the RDOF grant. The new mapping is not going to be a whit better as long as ISPs can continue to lie about speeds with impunity.

There are a few simple ways to fix some of the worst problems with the maps. First, the FCC could declare that all DSL is no longer broadband and stop bothering to measure DSL speeds. They could do the same with high-orbit satellites that have huge latency issues. But even doing this solves only a portion of the problem. There are still numerous WISPs that report marketing speeds that are far faster than actual speeds. The FCC maps are also about to get inundated by the cellular companies making the same overstated speed claims for fixed rural cellular broadband.

What is so dreadful about all of this is that a rural home may have no real option for broadband but might have FCC maps that show they can buy fast broadband from DSL, one or more WISPs, and one or more fixed cellular providers. The FCC is going to count such a home as a success because it has competition between multiple ISPs – when in reality the home might not have even one real broadband option.

I hate to be one of the few people who keep saying this – but I’m sure that the new FCC maps won’t be any better than the current ones. Unfortunately, by the time that becomes apparent, Congress will have assumed the mapping is good and will have moved on to other issues.