The Price for Faster Upload Speeds

I’ve always been impressed by the marketing folks at the big cable companies. They are masters of extracting money from customers willing to pay for better broadband. The latest example comes from Comcast. The company is introducing a new product in the Northeast that offers faster upload speeds – for a price.

Comcast knows that its biggest weakness is upload speeds. The upload speeds for products with download speeds up to 300 Mbps are only 10 Mbps, and the current 600 Mbps and 800 Mbps products get just 20 Mbps.
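To put that asymmetry in perspective, here is a quick sketch, using only the tier figures above, of the download-to-upload ratios in the current lineup:

```python
# Download-to-upload ratios for the current Comcast tiers cited above.
tiers = {300: 10, 600: 20, 800: 20}  # download Mbps -> upload Mbps

for down, up in tiers.items():
    print(f"{down} Mbps down / {up} Mbps up = {down // up}:1")
```

Ratios of 30:1 and 40:1 help explain why upload is the pain point for households with several simultaneous video users.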

Comcast is increasing download speeds across the board for no extra charge – this will catch the Northeast up to much of the rest of the country, where speeds have already been increased. But rather than highlight the deficiency of the technology, Comcast has created a new ‘premium’ product labeled as xFi to bring faster upload speeds. Comcast will charge $25 per month to upgrade upload speeds to as fast as 100 Mbps.

The following chart shows the download speeds today and the speeds after the automatic speed upgrade. The chart also shows the associated upload speeds – both the current speeds and what will be provided to customers willing to spend an extra $25 per month. Existing gigabit customers won’t see a download speed increase but will be able to buy faster upload speeds for the $25 price.

Download (Current)   Download (Upgraded)   Upload (Current)   Upload (xFi)
50 Mbps              75 Mbps               10 Mbps            75 Mbps
100 Mbps             200 Mbps              10 Mbps            100 Mbps
300 Mbps             400 Mbps              10 Mbps            100 Mbps
600 Mbps             800 Mbps              20 Mbps            100 Mbps
800 Mbps             1 Gbps                20 Mbps            100 Mbps
1.2 Gbps             no change             35 Mbps            100 Mbps
2 Gbps               no change             100 Mbps           200 Mbps

The upgrades in download speeds are supposed to happen over the next few months. The upload upgrades will come at some unspecified time next year.

To make it even more expensive, the xFi upgrade will only be available to customers who are also leasing a Comcast Wi-Fi 6E modem that costs $14 per month. The faster upload speeds won’t work on customer-owned modems. That brings the total cost to get faster upload speeds to $39 extra per month.

For years I’ve been saying that the big cable companies are going to be charging $100 for basic broadband. It looks like Comcast has gotten there sooner than I predicted with this upgrade – at least for customers willing to buy broadband that works.

The Comcast price today for the standalone basic 100 Mbps broadband product is $80. Customers who want to get faster upload speeds with xFi will now be paying $105, plus another $14 for the mandatory modem to get the faster upload – a total of $119. You have to give Comcast credit for being audacious and going for the big price increase all at once. Of course, many Comcast customers get a bundling discount, and new customers get promotional discounts – but with xFi, even those prices are likely to be at $100 or more.

This is just speculation, but I’m guessing that Comcast can’t give everybody faster upload speeds due to network limitations. Rather than admit a network deficiency, the Comcast marketing folks have prettied this up as a premium product. Doling this out only for those willing to spend more will extract the highest new revenues possible without bogging down the network.

One thing that is not being mentioned is that giving some customers faster upload speeds probably means a little slower uploads for everybody else – which will drive even more folks to pony up the extra money.

There is an easy way to get faster upload speeds without paying extra. Many homes in the Northeast can already get symmetrical broadband speeds on Verizon FiOS, and anybody thinking of paying extra to Comcast ought to consider that switch. But for customers in non-FiOS areas, this upgrade is probably the only way to get an upload link that works for a family with multiple broadband users. This new pricing is crying out for new fiber competition. An ISP can build fiber, charge $80 or $90 for symmetrical gigabit, and still bring savings to customers. I always expected that to happen, but not this soon.

It’s likely that Comcast will roll out this product in the rest of the country, and the other Comcast areas have the added burden of paying for data caps. Comcast never put data caps into the Northeast because of Verizon FiOS, but in the rest of the country, any consumers that use more than a terabyte of data in a month pay even more.

Using AM Radio Towers

One existing resource that is often overlooked in designing wireless networks is AM radio towers. For the most part, companies deploying fixed wireless and microwave antennas have avoided these towers. This is due to the nature of AM radio, which transmits at such a low frequency that the entire tower is effectively used as the transmitting antenna. The whole structure is energized with the AM signal, and the typical AM tower sits on a base insulator that isolates the tower from ground. The conventional wisdom has been to avoid AM towers as being too hot in power and frequency to use for other purposes.

There is an additional problem with AM towers in that any tall metal structure within about three kilometers of an AM tower can become what is called a parasitic radiator and can interfere with AM transmission. This has meant that nobody builds other wireless towers close to an AM tower, and the areas around an AM tower are often cellular dead spots – to the detriment of folks who happen to live close to a tower. Since there are around 10,000 AM broadcast towers, this implies many thousands of wireless dead zones.

But AM towers don’t have to be a wasted asset. There are two methods, often overlooked by cellular companies and WISPs, for installing other radios on AM towers. Both rely on isolating the new antennas from ground at the AM transmission frequency.

The first technique is known as a folded unipole. This consists of a vertical metal rod, called a mast, that is connected at the base of the AM tower to a conductive surface called a ground plane. The mast is surrounded by a series of vertical wires attached at the top of the mast and extending down to a metal ring near the base of the mast. The feed line for the mast is connected between the ring and ground. These wires must be mounted at carefully calculated heights. If installed properly, the tower can be isolated and used for other radios. This is a common technique for connecting an FM transmitter to an existing AM tower, but it can also allow for cellular or fixed wireless radios.

The other method for isolation is to install electronics on the transmission line that carries the radio signal to the antenna. The most common device is called an iso coupler, which allows RF signals within a certain frequency range to pass through while continuing to isolate the AM signal from ground. This lets the signal from cellular or fixed wireless electronics pass through while bypassing the effects of the AM signal on the tower. Another device that performs roughly the same function is a coil that can isolate the new antenna signals from the AM signals.

Both of these methods are referred to as detuning, meaning that a new radio can be isolated from the tuned AM signal that permeates the whole tower. Most engineers who are looking for towers avoid AM towers in the belief that it’s too complicated or costly to detune the tower to add other transmitters. Admittedly, getting this to work requires an experienced RF engineer who understands AM towers. But it’s a common practice used most often for adding FM transmitters. I’ve talked to some folks who say the process can be surprisingly affordable.

Anybody looking for tower space shouldn’t shy away from this option. The folks who own AM towers don’t often get the opportunity to lease space and are likely open to negotiating an affordable connection.

Restricting FCC Mapping Data

Last week, the FCC rejected dozens of requests from ISPs to keep confidential the method that the ISPs use to identify broadband coverage areas. This was prompted by the FCC requiring each ISP to explain to the agency how it determined broadband coverage areas in the latest round of gathering data for the FCC broadband maps.

Several dozen ISPs then asked the FCC to keep those responses confidential, with most ISPs arguing that the method of how customers are counted reveals proprietary data about the ISP networks. The FCC rejected all such arguments and commented that the public needs to know how customers and coverage areas are determined if there is to be any meaningful review and challenge of the FCC mapping data.

By the way, we now have some new industry acronyms. The FCC is referring to the new mapping process as the BDC (Broadband Data Collection). A second new term is the mapping fabric, meaning the underlying data that supposedly shows the location of every building in the country where somebody could order broadband. It’s always been hard to know if it’s deliberate, but referring to regulatory efforts with new acronyms confuses the public about what is going on. Somebody reading a news article talking about the BDC and challenges to the fabric likely has no idea what is being discussed.

All of this matters because the FCC has already started the process of allowing challenges to the mapping fabric. Local governments and ISPs are now able to challenge the locations of the ‘passings’, which are residential and business locations that could be a customer for broadband. Early comments suggest there are a lot of errors in the fabric developed by CostQuest. There are some places where too many passings have been identified, such as a farm where there are multiple buildings, most of which are not candidates to buy a broadband subscription. I’ve also heard there are places where a lot of actual passings are missing from the map. Most confusing is that there are a lot of places in the country that nobody knows how to count – such as vacation cabins.

One of the biggest hurdles to the fabric challenge is that the FCC mapping fabric data is not widely available for the public to examine. CostQuest has provided free access to localities to review local data, although some local governments are saying that it has been a challenge to get access to the data. Unfortunately, the contract between CostQuest and local government restricts the use of the data only for purposes of challenging the fabric data. It seems a local government can’t disclose details about the fabric to its citizens.

The FCC mapping data is not being made available to the general public. This makes challenging the maps difficult for rural counties, which mostly don’t have the resources to take the time to understand, let alone challenge the maps. Keeping the data proprietary means that the general public can’t participate in this challenge. In many rural counties, there are ad hoc broadband committees that would devote the time and energy to understand the local maps. But local folks who are interested in broadband but who are not officially sanctioned by the local government are not being allowed access to the data. The data is also not available generically to nonprofits, consultants, or others that have the technical skills to analyze the data.

I guess this means that the mapping fabric challenge is supposed to be done by local governments that don’t have the staff, funding, time, or technical expertise to understand the mapping fabric, let alone suggest corrections. Most of the rural counties I know are not reviewing the fabric – meaning that nobody is reviewing the broadband data in the places that most need better broadband.

I know several folks who are trying to find out how this happened – how a commercial business like CostQuest is allowed to act as if it owns the mapping data. Apparently, the FCC contract with CostQuest has given the company the right to monetize the data. I hope that after the facts are better known, Congress will step in and make all FCC mapping data open to the public.

I wrote recently about the data divide – where public data is not making it to the folks that need it the most. The federal government is spending huge amounts of money developing maps that show areas with and without access to broadband. I can’t think of a single reason why this data isn’t available to everybody. But I can think of two reasons to keep the data restricted. First, this will tamp down on a raft of news articles talking about errors in the mapping fabric. The second reason is to give CostQuest the chance to monetize the process. In my opinion, these are both unacceptable ways to treat data that was created with taxpayer money.

Can You Do it Yourself?

By now, you’ve probably heard about the Michigan man who built his own ISP when Comcast and AT&T wanted too much money to extend broadband to his neighborhood.

Jared Mauch lives in Scio Township, Michigan, outside of Ann Arbor. Jared is a senior network engineer for Akamai. He bought his house in 2005, and at the time, paid a lot extra for a T1, which delivered 1.5 Mbps. He assumed somebody would eventually bring faster broadband. He contacted Comcast and was quoted a cost of $50,000 to extend the cable network to his home. AT&T finally brought DSL to his neighborhood five years ago, but the top advertised speed was 1.5 Mbps. Jared was eventually able to connect to a WISP that offered 50 Mbps download.

Jared finally decided that he could make this work himself, at least in his small neighborhood. He created Washtenaw Fiber Properties and self-funded and built five miles of fiber that connected the homes in his neighborhood. He’s been able to sign up 70% of homes. He offers 100 Mbps symmetrical for $55 and gigabit service for $79. He charges $199 to help offset the cost of the installation. This made Jared one of the smallest fiber overbuilders around – but he is not unique, and folks have done this in neighborhoods where nobody else would.

Jared got a lot of press when he won a grant from the County in May of this year for $2.6 million to expand his network to reach 417 additional unserved addresses. The expansion will require 38 new miles of fiber, and he has until the end of 2024 to build the network. The grant came from the $71 million that the County received in ARPA funding from the federal government. The County is investing $15 million in rural broadband and also awarded grants to three other ISPs.

Jared’s story is not unique, although most tiny ISPs use wireless technology and not fiber. There are many dozens of similar stories of how rural WISPs got started. An example is Grizzly Broadband, a WISP started in rural Manhattan, Montana. Two decades ago, Craig Corbin founded the WISP when he wanted something faster than dial-up at his home and wanted to operate a server at his business. He started with radios on a single tower and grew over time. He was eventually able to build some fiber-to-the-home and is now expanding the fiber network each year.

I think the Washtenaw story got headlines because a tiny home-grown ISP won a sizable broadband grant. That is, unfortunately, a unique experience in the world of grants. Many federal grant rules say they are open to everybody and support the idea of local broadband solutions. Washtenaw Fiber and Grizzly Broadband are exactly what a local broadband solution looks like – folks bringing broadband to their neighbors when nobody else will.

But the Michigan grant only happened because the County trusted the small local ISP. Unfortunately, for small ISPs like Washtenaw, there is little hope of expanding further using the various federal grants like ReConnect and BEAD. While these grants say that anybody can apply, the specific ways to qualify for these grants say otherwise. An ISP needs to jump through a lot of hoops for a federal broadband grant. That would start with having a history as an ISP – something Grizzly has but Washtenaw doesn’t.

A federal grant applicant has to be able to guarantee the funding through certified letters of credit, and that means getting a financial institution to bless the business to the point of pre-approving a line of credit before any grant is awarded. That is something most small ISPs can’t do. It was nearly impossible even a few years ago, when interest rates were low and credit was a little easier, and banks have already started to pull back from making new loans as interest rates rise and the Federal Reserve tightens. There are a dozen other nuances of federal grant rules that are hard for small companies to navigate.

I can understand why the NTIA and the USDA favor existing ISPs. The agencies do not want headlines about a grant applicant that fails. But that means the rhetoric that says that the grants are available to anybody is a feel-good fiction. If somebody is not already successful as an ISP with a strong balance sheet, and a reliable banking relationship, then the reality for federal (and most state) broadband grants is that they need not apply.

Regulating Hidden Fees

Some of the big telcos and almost every large cable company use what the industry calls hidden fees. These are fees that are not mentioned when advertising a service but are put onto customer bills. The cable companies have the most egregious fees, in many cases over $20 per month for new video subscribers.

There is a class action lawsuit in California that shows why ISPs are not worried about using hidden fees. In times past, when the big companies were regulated, they might have been ordered to make a 100% refund of a fee that regulators decided was questionable. But the only realistic remedy against ISPs that misbill customers is a class action lawsuit or the rare ruling against a single ISP by the Federal Trade Commission.

There has been a class action lawsuit in California about the ‘administrative fee’ that AT&T charges to wireless customers. That fee started at $1 per month in 2013 and was raised to $1.99 in 2018. There is no basis for this fee – it’s just a portion of the cost of service split off into a separate charge. This lets AT&T advertise rates for $2 less than the actual price charged to customers. Somebody buying a $60 advertised plan will actually pay $61.99 because of this fee.

The Verge reported earlier this summer that AT&T and the plaintiffs in a class action lawsuit reached an agreed settlement, and AT&T is refunding $14 million to California wireless subscribers who make a claim. The class action lawsuit claimed that AT&T billed the fee without notifying the public or advertising the fee. But even in agreeing to the settlement, AT&T refused to admit any wrongdoing and says it fully disclosed all fees.

This award shows why big carriers can bill hidden fees with impunity. The typical settlement for a customer who makes a claim under this lawsuit will be between $15 and $29, which is far less than the roughly $180 per subscriber that AT&T has collected from this fee in California. The worst part of the settlement is that AT&T will continue to bill the fee, so it will recover the settlement from customers over the next year. AT&T also knows that most eligible customers won’t make a claim. It was reported that AT&T notified customers of the possible claim by text – which many people assume is spam. The settlement only applies to California customers and not folks in the rest of the country. This is a minuscule slap on the wrist for AT&T.
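As a rough sanity check on that $180 figure, here is a quick back-of-the-envelope calculation. The fee amounts and dates come from the fee history described above; the assumption that a subscriber paid continuously from 2013 through mid-2022 is mine:

```python
# Approximate administrative fees paid by a continuous AT&T wireless
# subscriber: $1/month from 2013 through 2017, then $1.99/month from
# 2018 through mid-2022 (the end date is an assumption, not a fact
# from the settlement).
months_at_1_00 = 5 * 12        # 2013 through 2017
months_at_1_99 = 4 * 12 + 6    # 2018 through mid-2022

total = months_at_1_00 * 1.00 + months_at_1_99 * 1.99
print(f"Approximate total fee paid: ${total:.2f}")  # ~$167
```

That lands in the same ballpark as the $180 average cited above – and either number dwarfs a $15 to $29 settlement check.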

Class action lawsuits are not a great tool for punishing bad behavior by carriers. Lawyers taking on these issues are taking a big chance that they will lose. Anybody filing such a suit has to spend a lot of time on discovery, made worse because carriers will typically drown plaintiffs with mountains of documents in response to data requests. The lawyers employed by large corporations are generally the best around, and many class action suits never reach completion. In this case, the class action lawyers will receive $3.5 million from the settlement – but they likely spent a lot of money over many years to get the case to a settlement.

The real solution to holding ISPs accountable is strong regulation. In an ideal world, the FCC or the California Public Utilities Commission would have ordered a full refund to customers that were harmed by misdeeds by a carrier. I didn’t do the research in writing this blog, but I assume that neither regulatory body felt it had that authority in this instance – or else they chose not to take it on. That’s certainly not surprising on the Federal side since the FCC under Ajit Pai prided itself on a shift to light-touch regulation – which is a euphemism for basically no regulation at all. When I broke into the industry in the 1970s, regulators would have made a carrier rebate every cent of an overbilling, so carriers were cautious about trying something like the administrative fee.

It is within the purview of the Federal Trade Commission to tackle this sort of issue, but the agency only has the manpower to pursue a limited number of cases against bad behavior of industries of all types. Companies like AT&T know that the risk of having an issue like this brought before the FTC is tiny. And even if it happened, the company would not likely have to return all of the improperly charged fees.

Hidden fees are an interesting issue because they clearly give carriers a marketing edge when competing against companies that don’t use them. The intent of carriers is to hide the fees or at least make it hard for a prospective customer to learn about them. The issue with hidden fees is not that a company divides the fee for service into several pieces – it’s that the full fees are not disclosed. ISPs and carriers are not the only ones using hidden fees, and President Biden said last month that the administration is going to crack down on hidden fees in the airline and travel industries.

Economy-of-Scale for ISPs

I’ve worked with a number of small communities that want to explore the idea of having a community-owned ISP. My advice to small communities is the same as with all clients – economy-of-scale really matters for ISPs.

Economy-of-scale is the economic term for describing how businesses get more efficient as they get larger. It’s fairly easy to understand, and the classic example is to look at the impact of the salary and costs of the general manager of an ISP. Consider the example where the all-in cost of salary and benefits of the general manager is $200,000 per year. If the ISP has 200 customers, that cost works out to $83 per month per customer – obviously impossible to cover. At 2,000 customers, that’s a cost of $8.33 per month per customer – probably doable, but a big strain on being profitable. At 20,000 customers, the cost is reduced to $0.83 per month – a cost that is easy to absorb. With a customer base of 200,000, this cost is only 8 cents per customer per month.
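The arithmetic behind those figures is simple enough to sketch:

```python
# Spread a fixed annual cost (the general manager's $200,000 all-in
# salary from the example above) across customer bases of different
# sizes, expressed per customer per month.

def monthly_cost_per_customer(annual_fixed_cost: float, customers: int) -> float:
    return annual_fixed_cost / customers / 12

GM_COST = 200_000  # annual salary plus benefits

for customers in (200, 2_000, 20_000, 200_000):
    cost = monthly_cost_per_customer(GM_COST, customers)
    print(f"{customers:>7,} customers: ${cost:,.2f} per customer per month")
```

The same curve applies to every fixed cost in the business – the billing system, network monitoring, trucks – which is why scale compounds.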

A large percentage of the costs of operating an ISP are fixed or nearly fixed. Any fixed cost acts in the same manner as the general manager’s salary. The larger the size of the ISP, the easier it is to cover fixed costs and the better the chance of being profitable.

It’s possible for a small ISP to break even, but doing so requires the operator to be extremely frugal – any unexpected expense can throw a tiny ISP into a loss. Operating a small ISP of a few hundred customers is best described as a labor of love – because it is not going to be profitable for the owner.

For many years I’ve said that the bare minimum number of customers to enable an ISP to be full-function was around 2,000. That’s enough revenue to cover the labor for a few employees, along with other operating expenses. An ISP with 2,000 customers still needs to keep an eye on expenses because it doesn’t take much to tip the business into losing money. However, with current inflation, I think the minimum size to be effective has probably grown closer to 3,000 customers.

In building business plans, I’ve always seen the real benefits of economy-of-scale kick in around 20,000 customers. That’s enough customers to be able to operate a full-function ISP that can deliver superior customer service. There is enough revenue to hire all of the needed staff and pay them well, including good benefits. ISPs smaller than 20,000 have to forego some of the benefits that come with size.

Interestingly, economy-of-scale doesn’t scale forever. In my experience, ISPs beyond a certain size start getting less efficient. The threshold is unique to the specific company, but it tends to kick in somewhere above 200,000 or 250,000 customers.

I’ve always attributed this to the phenomenon where the span of management control gets too broad. At some size, the core management team doesn’t know what is going on at the street level. When that happens, it’s inevitable to start seeing bureaucracy creep in, which is the curse of the giant ISPs. Local or regional management starts determining policies, often to the detriment of the overall business. The largest companies manage through a process of pitting regions against each other to earn bonuses. That’s far different than running an ISP with 20,000 customers. Almost all of the horror stories we’ve heard over the years about the poor treatment of Comcast customers can be attributed to regional managers who made cuts or implemented policies that benefitted their bonuses rather than benefitting the customers.

Try as they might, the giant ISPs are never going to have the same level of customer service as the smaller ISPs they compete against. Witness the many decades of duopoly competition between Comcast and Verizon FiOS. They are both big companies, and customers don’t love either of them in the way that I see customers being loyal to smaller ISPs.

Of course, size isn’t everything, and there are small ISPs who are terrible at the day-to-day operations of the business. Economy-of-scale refers to the scaling of costs and has nothing to do with the philosophy of how an ISP treats employees or customers.

My advice to any ISP with under 5,000 customers is to consider how much easier it would be to operate the company if it grew to 10,000 or 20,000 customers.

Defaulting on RDOF

Starry recently announced that it was defaulting on all of its $269 million of RDOF funding. Starry was the ninth-largest winner of the RDOF reverse auction that ended in December 2020. The FCC approved some of the Starry RDOF claims in August.

There have been other defaults of RDOF, but none of this magnitude. For example, the same announcement of the Starry default included additional defaults, including one by GeoLinks. There were a lot of defaults in the spring of 2021, when winners defaulted on small pockets of Census blocks that weren’t large enough for a coherent business plan.

Starry is not required to disclose why it’s defaulting. In the many articles about the RDOF default, there was a lot of speculation that the company doesn’t have the needed funding to complete the required builds. Starry reported 77,400 customers at the end of the second quarter of this year – gaining 14,300 customers in the quarter. The company claimed that it now passes 5.7 million potential customers. But the company has a big burn rate with a loss for the quarter of $33.9 million plus capital expenditures of $20.8 million.

Even if funding is the issue, it wouldn’t yet be an emergency for Starry. An RDOF winner has three years, starting with the year after the awards – in this case, until 2025 – to cover 40% of the RDOF areas. But delaying the cancellation probably risks increasing fines from the FCC.

I’ve also heard speculation from engineers that Starry might not have been happy with the performance of its technology in rural areas. It seems like a technology best suited to areas with decent household density. The technology being deployed can best be described as a wireless mesh network. Starry brings broadband into a neighborhood and then bounces the signal from customer to customer to extend the reach of the network. Over time, as the company gets more customers, it can blanket a large coverage area. This is a drastically different approach from the FWA cellular wireless deployments that rely on putting a small cell site in every served neighborhood – most of them fed by fiber. The Starry deployment should need fewer fiber-fed hubs and theoretically would have a lower deployment cost.

In June 2021, Starry announced a deployment across the Columbus, Ohio metropolitan area. But there is a big difference between the densely populated suburbs of Columbus, Ohio and rural areas in RDOF where homes might not be within sight of neighbors. There are plenty of engineers that are still skeptical of wireless plans using tall towers to bring fast speeds to rural areas. It’s even hard to imagine doing it with a mesh network.

With the default, all of the RDOF areas are back in play for other federal grants. Unfortunately for the customers in these areas that thought they had a broadband solution coming, they now need another ISP to step up and claim grant funding of some sort to bring broadband.

As can be seen on the map below of the Starry award areas, the company had claimed sizable service areas in Alabama, Arizona, Mississippi, Missouri, Nevada, Ohio, Pennsylvania, and Virginia.

Starry’s default is different than the recent action by the FCC to toss the RDOF awards to LTD Broadband and Starlink. The FCC had already made some awards to Starry, and the assumption is that it would have made the rest. The bottom line is that the Starry default is one more piece of the puzzle of solving the rural broadband gap, and the ISPs located close to the Starry defaults should take a hard look at changing grant plans.

CoBank Touting Edge Computing

A recent article from CoBank is titled, Partnerships are Key for Rural Telecom Operators in Burgeoning Edge Computing Market. The article points out that there are potential opportunities for ISPs to grab a small piece of the edge computing market.

The article defines edge computing as a network architecture where data is stored and/or processed at locations close to where applications are being used. The growth of edge computing is an interesting phenomenon to watch because it reverses the trend of the last decade, when the goal was to move as much data as possible to large data centers rather than process or store it at the edge.

However, as the volume of data being generated by companies has increased exponentially, the task of moving data back and forth to data centers has added cost and time to the equation. Companies are looking harder at the data they generate and realizing that a lot of it doesn’t need to be permanently cached at data centers. Companies also want to avoid the added latency of moving and processing things in a data center.

The article cites the following potential opportunities.

  • C-RAN. The trend identified is for cellular companies to process customer connection functions locally at cell sites instead of in the cloud. The opportunity for rural ISPs is to sell more connectivity to the expanding number of cell sites. However, if cellular data is processed locally, that would imply smaller transport bandwidth needed at each cell site.
  • Private Wireless Networks. The cellular carriers and companies like Microsoft and Amazon are likely to tackle this market. While there may be a few large customers in rural markets that want to participate in a private wireless network, the big opportunity is in selling the service to farms. Local ISPs can partner with one of the big application developers that will provide a communications suite for farms. The ISP opportunity will be selling transport to farms, but also possibly being hired to maintain farm wireless devices and monitors.
  • Internet of Things (IoT). This is the trend to develop smart sensors that can handle data locally without sending everything to a data center. Like with C-RAN, it seems like a stretch to see a role for a small rural ISP in the market other than perhaps being the local agent for the sensor devices.
  • Self-Driving Cars. The article relies on a prediction that a self-driving car will need to offload as much as 5,000 gigabits per hour of driving. I find it impossible to believe that anybody is going to invest in the rural network infrastructure that would be needed to ever serve this market. Most of the auto industry is chasing a future where vehicles will possess the needed computing power onboard rather than rely on somebody building a fiber network and billions of sensors along every mile of US highways. I was surprised to see this in the CoBank article since the chances of this happening seem slim.

The only opportunity on this list that might realistically materialize in the next decade and be a revenue opportunity for rural ISPs is private wireless networks for farms. It’s not hard to imagine a business relationship where rural ISPs become the local agent for smart farming connectivity and devices, in much the same way that many local ISPs were the agent for products like DirecTV. It’s not hard to imagine the rural ISP industry associations negotiating a contract for such services on behalf of members, making it easy to participate.

I was intrigued to see CoBank writing this article because bankers generally concentrate on opportunities that are either here today or on the immediate horizon. This article talks about pretty futuristic stuff. The question any rural ISP will ask is if any of these applications will ever become tangible and actionable. I remember a decade ago when the rage in the industry was telling rural ISPs that there was a lot of money to be made in fostering cellular offload to WiFi. I can’t think of anybody I know who ever made a nickel on the idea, but you couldn’t go to an industry meeting without somebody promoting it. There are a whole lot of steps that have to happen before any of these edge-computing ideas turn into something that the average rural ISP can profitably participate in. But I have no doubt that some of the ideas in this article, or applications we haven’t thought of, will become real eventually. The one thing that rural ISPs have that is hard to duplicate is a local presence and local technical expertise.

Rural America is Losing Patience

From all across the country, I’m hearing that communities without broadband are tired of waiting for a broadband solution. Local broadband advocates and politicians tell me that folks with little or no broadband are hounding them about when they are going to see a broadband solution.

A large part of the frustration is that folks have heard that broadband is coming to rural America, but they aren’t seeing any local progress or improvement. A big part of the reason for this frustration is that folks aren’t being given realistic time frames for when they might see a solution. Politicians all gladly told the public that they had voted to solve rural broadband when the IIJA infrastructure legislation was enacted in November 2021. But almost nobody told folks the actual timelines that go along with the broadband funding.

Consider the timeline to build broadband as a result of various kinds of funding:

  • There was a recent round of ReConnect funding awarded. It generally takes 4-6 months to get the paperwork straight after accepting a ReConnect award, and then grant winners have four years to build the network. Some folks in the recently awarded ReConnect areas won’t get broadband until 2026, and most won’t see any broadband until 2024 or 2025.
  • The best timelines are with state broadband grants. Most of those awards require grant recipients to complete networks in two or three years. A lot of these grant programs were either recently awarded or will be awarded in the coming spring. The winners of these state grants will have until the end of 2024 or 2025 to complete the network construction, depending on the state and the specific grant.
  • The longest timeline comes from the FCC’s RDOF program. The FCC approved a lot of RDOF recipients in 2021 and again in 2022. A 2021 RDOF recipient has six years to build the full broadband solution – starting with 2022, the year after the award. A recipient of a 2021 RDOF award must build 40% of the network by the end of 2024, 60% of the deployment by the end of 2025, 80% of the network by the end of 2026, and 100% of the network by the end of 2027. At the end of 2027, the FCC will publish a final list of locations in the RDOF area, and the ISP has until the end of 2029 to reach any locations that have not already been covered. For an RDOF award made in 2022, add a year to each of these dates.
  • The big unknown is the giant $42.5 billion BEAD grants. We know that grant recipients will have four years to construct a network. But we don’t know yet when these grants might be awarded. It’s starting to look like grant applications might be due near the end of 2023 or even into 2024. This likely means grant awards in 2024. There will likely be an administrative pause for paperwork before the four-year time clock starts. My best estimate is that the bulk of BEAD construction will occur in 2025 and 2026, but BEAD grant projects won’t have to be completed until sometime in 2028 and maybe a little later in some cases.
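The RDOF milestone schedule described above reduces to a simple formula: the clock starts the year after the award, 40% of the network is due at the end of the third year of the clock, and another 20% is due each year after that. A minimal sketch of that math (illustrative only; the percentages and offsets come from the schedule above):

```python
def rdof_milestones(award_year: int) -> dict:
    """Return {percent_complete: deadline_year} for an RDOF award.

    The six-year buildout clock starts the year after the award,
    with 40% due two years after the start and 20% more due at the
    end of each following year.
    """
    start = award_year + 1  # clock starts the year after the award
    return {
        40: start + 2,   # e.g., 2021 award -> 40% by end of 2024
        60: start + 3,
        80: start + 4,
        100: start + 5,  # e.g., 2021 award -> 100% by end of 2027
    }
```

For a 2021 award this yields deadlines of 2024, 2025, 2026, and 2027; for a 2022 award, every date shifts out one year, matching the schedule in the bullet above.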

In all cases, ISPs can build earlier than the dates cited above. ISPs realize that the longer they delay construction, the higher the likely cost of the construction. But some grants have built-in delays, such as having to complete an environmental study before any grant funds will be released. Many ISPs are going to suffer from supply chain issues with materials and labor and might not be able to speed up construction much.

The big problem is that people without good broadband want a solution now, not years from now. A family with a freshman in high school doesn’t want to hear that a broadband solution won’t reach them until after that student graduates from high school. People are getting frustrated by announcements from state and local politicians telling them a solution is coming – especially since most announcements aren’t being truthful about the possible timeline. Unfortunately, politicians like to deliver the good news but don’t want to be the ones to announce that faster broadband might reach folks between 2025 and 2028.

Folks are further frustrated when they hear that local governments are creating partnerships or giving grants to ISPs from ARPA funding – but again, with no immediate action or disclosures of the timeline. I am the last person in the world to give advice to local politicians – but I know if I didn’t have broadband at my home, I’d want to hear the truth about when it’s coming. This has to be tough for rural politicians who have negotiated partnerships with good ISPs but who know that a broadband solution is still likely years in the future.

Funding the Universal Service Fund

The FCC’s Universal Service Fund (USF) has been a mainstay in the telecom industry since it was created in 1997 as a result of the Telecommunications Act of 1996. That Act explicitly ordered the FCC to adopt the following universal service principles – all of the FCC’s actions with the USF are derived from these simple principles.

  • Promote the availability of quality services at just, reasonable, and affordable rates.
  • Increase nationwide access to advanced telecommunications services.
  • Advance the availability of such services to all consumers, including those in low-income, rural, insular, and high-cost areas, at rates that are reasonably comparable to those charged in urban areas.
  • Increase access to telecommunications and advanced services in schools, libraries, and rural health care facilities.
  • Provide equitable and non-discriminatory contributions from all providers of telecommunications services for the fund supporting universal service programs.

The size of the USF has remained stable over the last decade, with USF disbursements of $8.7 billion in 2012 and $8.5 billion in 2021. The USF is funded by an assessment on interstate telecommunications services. The largest sources of funding are the assessments on landline telephone service, interstate long-distance, cellular and texting services, interstate private lines, and a host of smaller telecommunications services.

The USF is facing a big challenge because keeping the size of the fund stable has placed an ever-increasing burden on those paying into it, since interstate revenues have been steadily declining. To put this into perspective, the assessment rate on interstate services was 16.7% in the first quarter of 2017 and grew to 33% in the third quarter of 2022.
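The rising rate follows directly from the arithmetic: the assessment rate is roughly the fund size divided by the assessable revenue base, so a flat fund over a shrinking base forces the rate up. A back-of-the-envelope sketch, assuming the fund stayed near $8.5 billion in both periods (the implied base figures are derived from the rates above, not official FCC numbers):

```python
def implied_base(annual_fund_b: float, assessment_rate: float) -> float:
    """Contribution base in $ billions implied by a fund size and rate.

    Since rate = fund / base, the base is fund / rate.
    """
    return annual_fund_b / assessment_rate

# Q1 2017: a 16.7% rate implies roughly $51 billion of assessable revenue
base_2017 = implied_base(8.5, 0.167)

# Q3 2022: a 33% rate implies roughly $26 billion - about half the base
base_2022 = implied_base(8.5, 0.33)
```

In other words, the revenue base subject to USF assessment roughly halved over five years while the fund stayed the same size, which is exactly why the rate roughly doubled.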

In the IIJA legislation, Congress ordered the FCC to take a fresh look at the USF. It ordered the agency to explore both the uses of the USF and the sources of USF funding.

The FCC reported back to Congress as ordered, and the FCC largely said that it still believes that its uses of the USF funding are appropriate and should continue into the future. There are critics of many of the functions funded by the USF, such as the 2020 RDOF reverse auction, but there is nearly universal support for programs like broadband for schools and rural healthcare.

As part of the review of the USF, the FCC considered two funding ideas:

  • One of the easiest ways to spread the costs of funding the USF would be to expand the assessments to include broadband. This would lower the assessment rate from 33% to some tiny amount depending on how the assessment for broadband is calculated.
  • The FCC also explored the idea of assessing USF to “edge providers,” which are the large Internet firms such as streaming video providers, digital advertising firms, and cloud services companies – companies like Netflix, Facebook, and Amazon. This would shift most of the burden for funding the USF from the public to businesses – although businesses typically pass USF fees back to the public. But edge providers that offer free services like Facebook, Twitter, and Google search would have to eat the cost of such assessments.

The FCC decided not to take a position on the two funding ideas. The report to Congress said that the agency probably doesn’t have the authority to expand the USF assessment to these two groups of payers. The FCC thinks that Congress would have to act to change the method of USF assessments.

It will be interesting to see if Congress decides to take up the USF issue. One of the key issues facing Congress will be deciding if it wants to fund and continue the ACP program that provides a $30 monthly discount on broadband for low-income homes. Congress assigned the operation of that fund to USAC, a non-profit organization that works under FCC guidance to operate the USF.

I have to think there are those in Congress who take exception to the FCC’s assessment that it is using the USF wisely. There are a lot of critics of recent programs like CAF II and RDOF. There have been plenty of critics of the Lifeline program, and I assume there are those against continuing the ACP low-income discounts. One of the risks the FCC faces is that Congress might decide to resize or eliminate programs if it takes up the issue.