The Gigabit Wireless Controversy

One of the big controversies in the RDOF auction was that the FCC allowed three of the top ten grant winners to bid using gigabit wireless technology. These were Starry (Connect Everyone), Resound Networks, and Nextlink (AMG Technology). By bidding in the gigabit tier, these companies were given the same technology and dollar weighting as somebody bidding to build fiber-to-the-premises. There was a big outcry from fiber providers who claimed that these bidders gained an unfair advantage because the wireless technology will be unable to deliver gigabit speeds in rural areas.

Fiber providers say that bidding with gigabit wireless violates the intent of the grants. Bidding in the gigabit tier should mean that an ISP can deliver a gigabit product to every customer in an RDOF grant area. Customers don’t have to buy a gigabit product, but the capability to provide that speed to every customer must be there. This is something that comes baked-in with fiber technology – a fiber network can deliver gigabit speeds (or 10-gigabit speeds these days) to any one customer, or just as easily to all of them.

There is no denying that there is wireless technology that can deliver gigabit speeds. For example, there are point-to-point radios using millimeter-wave spectrum that can deliver a gigabit path for up to two miles or a multi-gigabit path for perhaps a mile. But this technology delivers the bandwidth to only a single point. This is the technology that Starry and others use in downtown areas to beam a signal from rooftop to rooftop to serve apartment buildings, with the bandwidth shared by all of the tenants in the building. This technology delivers up to a gigabit to a building, but something less to each tenant. We have a good idea of what this means in real life because Starry publishes the average speed of its customers. In March 2021, the Starry website said that its average customer received 232 Mbps download and 289 Mbps upload. That’s a good bandwidth product, but it is not gigabit broadband.

There is a newer technology that is more suited for areas outside of downtown metropolitan areas. Siklu has a wireless product that uses unlicensed spectrum in the V-band at 60 GHz and around 70 GHz. This uses a Qualcomm chip that was developed for the Facebook Terragraph technology. A wireless base station that is fiber-fed can serve up to 64 customers – but the catch is that the millimeter-wave spectrum used in this application travels only about a quarter of a mile. Further, this spectrum requires a nearly perfect line-of-sight.

The interesting feature of this technology is that each customer receiver can also retransmit broadband to make an additional connection. Siklu envisions a network where four or five hops can be made from customer to customer to extend broadband around the base transmitter. Siklu advertises this product as being ideal for small-town business districts where a single fiber-fed transmitter can reach the whole downtown area through the use of the secondary beams. With a handful of customers on a system, this could deliver a gigabit wireless product. But as you start adding secondary customers, this starts acting a lot like a big urban apartment building, and the shared speeds likely start looking like what Starry delivers in urban areas – fast broadband, but not a network where every customer can receive a gigabit.

The real catch for this technology comes in the deployment. The bandwidth is pretty decent if every base transmitter is on fiber. But ISPs using the technology are likely going to cut costs by feeding additional base stations with wireless backhaul. That’s when the bandwidth starts to get chopped down. An RDOF winner would likely have to build a lot of fiber and have transmitters every mile to get the best broadband speeds – but if they dilute the backhaul by using wireless connections between transmitters, or by spacing base stations further apart, then speeds will drop significantly.
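
To make the dilution argument concrete, here is a minimal back-of-the-envelope sketch in Python. The base-station capacity, customer counts, and backhaul factor are my own illustrative assumptions rather than Siklu or Starry specifications; the point is simply that shared capacity shrinks quickly as customers and wireless hops are added.

```python
# Back-of-the-envelope sketch of shared fixed-wireless capacity.
# All numbers are illustrative assumptions, not vendor specifications.

def per_customer_mbps(base_gbps, customers, backhaul_factor=1.0):
    """Worst-case shared capacity per customer, in Mbps.

    base_gbps       -- assumed capacity of the fiber-fed base station
    customers       -- subscribers sharing that capacity
    backhaul_factor -- fraction of capacity left after wireless backhaul hops
                       (1.0 = base station fed directly by fiber)
    """
    return base_gbps * backhaul_factor * 1000 / customers

# A fiber-fed node with a handful of customers looks great:
print(per_customer_mbps(1.0, 5))        # 200.0 Mbps if all five peak at once

# Fill the node toward a 64-customer limit and the shared number collapses:
print(per_customer_mbps(1.0, 64))       # ~15.6 Mbps at full concurrency

# Feed the same node through a wireless relay that halves usable backhaul:
print(per_customer_mbps(1.0, 64, 0.5))  # ~7.8 Mbps at full concurrency
```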

The other major issue with this technology is that it’s great for the small-town business district, but how will it overlay in the extremely rural RDOF areas? The RDOF grants cover some of the most sparsely populated areas in the country. The Siklu technology will be quickly neutered by the quarter-mile transmission distance when customers live more than a quarter-mile apart. Couple this with line-of-sight issues and it seems extremely challenging to reach a lot of the households in most RDOF areas with this technology.

I come down on the side of the fiber providers in this controversy. In my mind, an ISP doesn’t meet the grant requirements if it can’t reach every customer in an RDOF area. An ISP also doesn’t meet the gigabit grant requirements if only some customers can receive gigabit speeds. That’s the kind of bait-and-switch we’ve had for years, thanks to an FCC that has allowed an ISP to bring fast broadband to one customer in a Census block and declare that everybody has access to fast speeds.

It’s a shame that I feel obligated to come to this conclusion because, deployed well, these wireless technologies can probably bring decent broadband to a lot of homes. But if these technologies can’t deliver a gigabit to everybody, then the ISPs gained an unfair advantage in the RDOF grant bidding. When I look at the widely spaced homes in many RDOF areas, I can’t picture a wireless network that can reach everybody while also delivering gigabit capabilities. The only way to make this work would be to build fiber close to every customer in an RDOF area – and at that point, the wireless technology would be nearly as costly as FTTH and a lot more complicated to maintain. I think the FCC bought the proverbial pig-in-a-poke when it approved rural gigabit wireless.

The Birth of an Incumbent

Dish Network recently wrote a letter to the FCC pointing out that T-Mobile had reversed its position over the last year on CBRS spectrum and other wireless issues. The opening paragraph of the letter contains the statement that is the genesis of today’s blog. Dish wrote, “As T-Mobile celebrates the one-year anniversary of its acquisition of Sprint, it is clear that the company’s worldview has transformed to that of an entrenched incumbent commensurate with its newfound size and scale”.

That sentence probably marks the date on which we should all start thinking of T-Mobile as an incumbent, with all that entails. In my mind, an incumbent in the telecom world is a carrier that acts like a monopoly. An incumbent does everything possible to maximize profits. Incumbents throw up barriers to entry to anybody that might compete with them.

The Dish letter points out that last behavior. T-Mobile had historically been a champion for opening up CBRS spectrum for rural use by small wireless companies. But as an incumbent, T-Mobile is suddenly against boosting power levels for CBRS that would make the spectrum useful in a rural setting. This change of position demonstrates that T-Mobile is not willing to accept even the slightest amount of interference from rural use of CBRS, even though the spectrum rules are written to minimize such interference.

T-Mobile is positioned to be an incumbent. In 2020, after the merger with Sprint, T-Mobile had almost 25% of the cellular market, ahead of Verizon at 24%, but still behind AT&T at 35%.

It’s an interesting change at T-Mobile considering its history in the US market. T-Mobile spent years touting itself as the Un-carrier under CEO John Legere. The company painted itself as the cellular carrier that looked out for the public with low prices, faster speeds, and better features – all different than what was offered by AT&T and Verizon. It was an interesting marketing posture and helped T-Mobile grow from an 11% market share a decade ago to 16% before the merger with Sprint.

Economists say that it’s inevitable that any company that gains market power will trend towards acting like a monopoly. This tendency isn’t due only to changes of behavior in the boardroom, but rather happens from top to bottom in big companies as employees start taking steps to capitalize on the company’s market advantages. Monopolies tend to reward employees for improving the bottom line, and things occur out of sight of upper management. There is probably no better example of this than the many bizarre stories of overaggressive behavior from Comcast customer service. Much of this behavior has been blamed on regional service managers who took aggressive positions with the public to improve bonuses. The same thing was one of the primary causes of the behavior at Wells Fargo, where employees added unrequested accounts for customers as a way to earn sales bonuses.

If T-Mobile has indeed become a monopolist, and economic history suggests that’s inevitable, then this is a good reason for the country to oppose mergers that create monopolies. Cellular customers in the US would have been better off in the long run with a hungry and separate T-Mobile and Sprint rather than letting them combine to create another monopoly.

There is no question that the cellular industry is controlled by the three monopolies of AT&T, T-Mobile, and Verizon. The next largest cellular carrier is US Cellular with barely more than 1% of the market. Dish will be trying to carve a niche in the market, but that’s not going to be easy when there are three incumbents pushing for policies and rules that maintain their market power.

Realistically, T-Mobile became an incumbent on the day of the merger with Sprint. It took less than a year for somebody to officially call out T-Mobile at the FCC as an entrenched incumbent.

Focus on Sustainability

There are a few glaring holes in all federal broadband grants that have to do with how a grant recipient uses the network that was constructed with grant dollars. I wrote a recent blog that talks about the fact that most grants surprisingly don’t have any mandate that the grant recipient serve any customers in the grant area. For example, Starlink could take a grant for western North Carolina but never sign a customer in the grant areas.

Even more amazingly, there is no proof required that the grant money was all spent for the intended purposes in the grant areas. Consider the CAF II grants, where the telcos self-report that they have completed the upgrades in each grant area – the telcos were not required to show any proof of the capital spending. A lot of people, including me, think that the big telcos didn’t make many of the required CAF II upgrades. The FCC has no idea if grant upgrades were really done. It would have been easy for the FCC to demand proof of capital expenditures showing the labor and specific equipment that was used in each of the grant areas. Such a requirement would have forced the telcos to do the needed work because it would be extremely easy for an FCC auditor to show up and ask to see some of the specific equipment that was claimed as installed.

Today’s blog talks about the third missing element of federal grants – grant recipients don’t have to make any promise to maintain the networks after they are constructed. There is nothing to stop a grant recipient from taking the grant money, building the network, and then milking revenues for years without spending any future capital.

Industry experts will tell you that a new fiber network will likely be relatively problem-free once the initial issues are shaken out. Unless fiber is cut, or unless customer electronics go bad, there is not a lot of maintenance capital required for the first decade after building a new fiber network. There will still be fiber cuts and storm damage and the inevitable things that happen in the real world, but fiber technology is so tried and true right now that it largely works well out of the box.

I wrote a blog recently that conjectured that a fiber network can be a hundred-year investment. But the key to longevity is maintenance. If a grant recipient treats a fiber network the way that the big telcos have treated copper networks, then new fiber networks will start deteriorating in ten years and will be dead in thirty years. Good maintenance means properly fixing fiber cuts with quality splices. It may mean replacing stretches of fiber that demonstrate ongoing problems that might have come from the factory or from improper handling during installation. But most importantly, maintenance means upgrading and replacing electronics.

Fiber electronics don’t last forever. Manufacturers talk about a 7-year life on electronics, but they are in the business of selling replacements. There is no physical reason to replace a customer ONT as long as it keeps working, and we’ve already seen some fiber ONTs last for as long as fifteen years. But my guess is that, on average, the electronics are going to require upgrades every ten or twelve years.
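
To put a rough number on what that replacement cycle means for sustainability, here is a simple sketch of the annual capital reserve an ISP would need to set aside per subscriber just for electronics. The ONT and OLT port costs are hypothetical placeholders, not vendor quotes.

```python
# Rough sustainability math: the annual reserve needed per subscriber just to
# replace fiber electronics on schedule. Unit costs are hypothetical placeholders.

def annual_reserve_per_sub(ont_cost, olt_cost_per_sub, life_years):
    """Straight-line reserve per subscriber per year to fund replacements."""
    return (ont_cost + olt_cost_per_sub) / life_years

# Assume a $150 ONT plus $50 of OLT port cost per subscriber:
print(annual_reserve_per_sub(150, 50, 12))  # ~$16.67/year on a 12-year cycle
print(annual_reserve_per_sub(150, 50, 7))   # ~$28.57/year on a 7-year vendor cycle
```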

Luckily, it looks like many of the FTTP upgrades already on the market involve what we call an overlay. This means introducing a new core that can provide new customer electronics while still being able to support the old equipment, as long as it’s working well. This is the sane way to do upgrades because a company can phase customers from old electronics to new over many years rather than going through the chaotic process of trying to change technology for a lot of customers at the same time.

But back to the grants. Federal grants are going to turn out to be a total disaster if the companies receiving the grants don’t build what they are supposed to build and maintain the network to keep it running for a hundred years. This won’t become apparent for fifteen or twenty years, but then we’ll start hearing about big problems in rural areas where customers on poorly maintained fiber networks go out of service and can’t get repairs.

It really bothers me to know that there are bad ISPs in the industry who are likely to take the grant money with the intention of milking the revenues and not reinvesting in the networks. We know that cooperatives, small telcos, and municipal network owners will be happily operating grant-funded fiber networks a century from now. But amazingly, sustainability isn’t part of the discussion or the criteria for deciding which ISPs deserve grant funding. We continue to pretend that all ISPs are good corporate citizens even after some have proved repeatedly that they are not.

4G on the Moon

This blog is a little more lighthearted than normal. An article in FierceWireless caught my eye that talks about how Nokia plans to establish a 4G network on the moon.

The primary purpose of the wireless technology will be to communicate between a base station and lunar rovers. 4G LTE is a mature and stable technology that can handle data transmission with ease – particularly in an environment where there won’t be any interference. While the initial communications will be limited to a base station and lunar rovers, the choice of 4G will make it easier to integrate future devices like sensors and astronaut cellphones into the network. NASA historically used proprietary communications gear, but it makes a lot more sense to use a communications platform that can easily communicate with a wide range of existing devices.

One challenge Nokia and NASA have to overcome on the moon is that the transmissions will be made between a low-sitting rover and a base station antenna that probably won’t be more than 3 – 5 meters off the ground. While there are no trees or other such obstacles on the moon, there are plenty of boulders and craters that will be a challenge for communications.

Nokia will have one benefit not available on earth – they can use the best spectrum band possible for the transmissions. They can establish wider data channels than are used on earth to accommodate more data within a transmission. Nobody has ever been handed a clean spectrum slate to develop the perfect 4G system before, and Nokia engineers are probably having a good time with this.

The biggest challenge will be in designing a lightweight cellular base station that contains the core, the baseband, and the radios in a small box. All of the components must be hardened to work in wide-ranging temperatures on the moon, which can range from a high of 260 F in the daytime to minus 280 F in the dark.

Nokia engineers know they have to test, then retest the gear – there will be no easy repairs on the moon. The vision is that future lunar landings will touch down on the surface and then send off both manned and unmanned rovers to explore the moon’s surface. The 4G gear must survive the rigors of an earth launch, a moon landing, and the vibrations and jolts from rovers and still be guaranteed to always work in the desolate lunar environment.

I have to admit that my first reaction to the article was, “Shouldn’t we be putting 5G on the moon?”. But then it struck me. There is no 5G anywhere in the world other than the marketing product that cellular carriers call 5G. Since there will be no easy upgrades in space, Nokia engineers are being honest in calling for 4G LTE. Honestly labeling this as 4G will remind future engineers and scientists about the technology being used. Wouldn’t it be refreshing if Nokia was as honest about the 5G in our terrestrial cellular networks?

Next Generation PON is Finally Here

For years, we’ve been checking the prices of next-generation passive optical network (PON) technology as we help clients consider building a new residential fiber network. As recently as last year there was still a price penalty of 15% or more for buying 10 Gbps PON technology using the NG-PON2 or XGS-PON standards. But recently we got a quote for XGS-PON that is nearly identical in price to the GPON that’s been the industry standard for over a decade.

New technology is usually more expensive at first for two reasons. Manufacturers hope to reap a premium price from those willing to be early adopters. You’d think it would be just the opposite since the first buyers of new technology are the guinea pigs who have to help debug all of the inevitable problems that crop up in new technology. But the primary reason that new technology costs more is economy of scale for the manufacturers – prices don’t drop until manufacturers start producing large quantities of a new technology.

The XGS-PON standard provides a lot more bandwidth than GPON. The industry standard GPON technology delivers 2.4 Gbps download and 1 Gbps upload speed to a group of customers – most often configured at 32 passings. XGS-PON technology delivers 10 Gbps downstream and 2.5 Gbps upstream to the same group of customers – a big step up in bandwidth.
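
For a sense of what that step up means per customer, here is a minimal sketch dividing each standard's shared capacity across the 32-way split mentioned above. This is worst-case, full-concurrency arithmetic rather than the speeds an ISP would market, since real PONs rely on statistical sharing.

```python
# Shared PON capacity per passing at a 32-way split (worst case, everyone
# transmitting at once). Real networks rely on statistical sharing, so
# marketed speeds are much higher than these floors.

PON_STANDARDS = {
    # name: (downstream Gbps, upstream Gbps)
    "GPON":    (2.4, 1.0),
    "XGS-PON": (10.0, 2.5),
}

SPLIT = 32  # passings sharing one PON

for name, (down, up) in PON_STANDARDS.items():
    print(f"{name:8s} {down * 1000 / SPLIT:6.1f} Mbps down / "
          f"{up * 1000 / SPLIT:5.1f} Mbps up per passing")

# GPON       75.0 Mbps down /  31.2 Mbps up per passing
# XGS-PON   312.5 Mbps down /  78.1 Mbps up per passing
```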

The price has dropped for XGS-PON primarily due to its use by AT&T in the US and Vodafone in Europe. These large companies and others have finally purchased enough gear to drive down the cost of manufacturing.

The other next-generation PON technology is not seeing the same price reductions. Verizon has been the only major company pursuing the NG-PON2 standard and is using it in networks to support large and small cell sites. But Verizon has not been building huge amounts of last-mile PON technology and seems to have chosen millimeter-wave wireless technology as the primary technology for reaching into residential neighborhoods. NG-PON2 works by having tunable lasers that can function at several different light frequencies. This would allow more than one PON to be transmitted simultaneously over the same fiber but at different wavelengths. This is a far more complex technology than XGS-PON, which basically has faster lasers than GPON.

One of the best features of XGS-PON is that some manufacturers are offering this as an overlay onto GPON. An overlay means swapping out some cards in a GPON network to provision some customers with 10 Gbps speeds. An overlay means that anybody using GPON technology ought to be able to ease into the faster technology without a forklift upgrade.

XGS-PON is not a new technology – it’s been around for about five years. But the price differential stopped most network owners from considering the technology. Most of my clients tell me that their residential GPON networks average around 40% utilization, so there have been no performance reasons to upgrade to faster technology. But averages are just that, and some PONs (neighborhood nodes) are starting to get a lot busier, meaning that ISPs are having to shuffle customers to maintain performance.

With the price difference finally closing, there is no reason for somebody building a new residential network to not buy the faster technology. Over the next five years as customers start using virtual reality and telepresence technology, there is likely to be a big jump up in bandwidth demand from neighborhoods. This is fueled by the fact that over 9% of homes nationwide are now subscribing to gigabit broadband service – and that’s enough homes for vendors to finally roll out applications that can use gigabit speeds. I guess the next big challenge will be in finding 10 gigabit applications!

Build It and They Will Fill It

Early in my career as a consultant, I advised clients to not adopt the philosophy of “build it and they will come”. Fifteen years ago, when fiber networks were first being built to residential communities, I had clients who were so enamored with fiber technology that they couldn’t imagine that almost every household wouldn’t buy broadband from a new fiber network.

I saw clients invest in fiber networks and take bank loans based upon irrationally high customer penetration rates, with no basis for their projections other than hope. Fiber overbuilders who counted on everybody taking fiber were inevitably disappointed, and over time I saw most fiber builders become more realistic about penetration rates and engage in surveys and pre-sales efforts to get a better idea of how well they would do.

Interestingly, I’m seeing this same concept creep back into the industry. This time it has to do with building middle-mile transport fiber. I have heard the phrase ‘build it and they will fill it’ a number of times over the last few years. There are examples of fiber transport routes being subscribed quickly, and the exuberance from a few such examples has some fiber builders believing that they can’t fail in building transport fiber.

Unfortunately, for every fiber route that is a huge success, I can point to a dozen fiber routes that languish with little traffic. As it turns out, middle-mile fiber is probably the one product in our industry that best illustrates the classic economics of supply and demand.

Buyers of middle-mile transport have explicit needs to get from point A to point B. If a given fiber route can be part of such a solution, then they will consider buying transport. But buyers of transport usually consider all of the alternatives to buying on a given fiber route – there are almost always alternatives. I know one case where three different carriers built fiber to reach a large rural data center. This instantly created price competition and none of the carriers are seeing the revenues they hoped for when building the fiber.

Some of the companies that buy transport will also consider building fiber rather than buying dark fiber or lit bandwidth. Verizon is probably the best example of this – they seem to have an internal formula that determines when building is better than leasing. Even worse for fiber owners, once Verizon builds fiber it is instantly competing with the existing fiber.

Companies that lease fiber also have to deal with other issues. The ideal long-haul fiber route has a minimal number of POPs, and some carriers avoid routes with too many stopping points. Intermediate stopping points and POPs increase electronics costs and maintenance costs and each electronics site degrades the light signal a bit.

I advise that anybody building transport fiber needs to have an iron-clad reason that justifies building a specific route – even if there are no other revenues. If the carrier can’t enter a new market without the new transport, then the route is mandatory. But a carrier ought to have already lined up enough basic revenues to justify building a non-mandatory transport route. If one major fiber tenant pays enough to recover the cost of building the route, then it might be a good risk.

The same advice to be careful applies whether a route connects major cities or goes to rural areas. I remember years ago helping a client find a connection between Dallas and Kansas City and we found seven separate fibers that made the connection. This level of overbuilding drops the lease price for the route.

We had an interesting national experiment over a decade ago in building a lot of middle-mile fiber to rural communities, funded by the ARRA stimulus grants. A lot of the fiber built with those grants was pure middle-mile transport, with only a few stops along the routes to serve a handful of rural anchor institutions. Looking back a decade later provides a great example of today’s topic. Many of the ARRA routes have attracted almost no interest even after a decade. Some routes built with the grants are doing well and gained transport sales to cellular carriers and to ISPs wanting to serve the last mile. It’s a challenge when comparing the winners and losers among those routes to understand why some rural routes attracted transport customers while other similar routes have not.

Leasing transport in rural markets is a tough business. The big wireless carriers like Verizon and AT&T have grown increasingly leery of entering into long-term fiber leases. Carriers that want to reach small rural towns to provide last mile fiber can’t afford to pay a lot for transport. Many WISPs are notoriously overextended and can’t afford expensive leases. While school systems might lease fiber for a while, they are always looking for grants to build and own the routes directly. The bottom line is that if you build it, there is no guarantee they will fill it.

Cost Models and Grants

Possibly the least understood aspect of the recent FCC RDOF grants is that the FCC established the base grant amount for every Census block in the auction using a cost model. These cost models estimate the cost of building a new broadband network in every part of the country – and unfortunately, the FCC accepts the results of the cost models without question.

The FCC contracts with CostQuest Associates to create and maintain the cost estimation models. The cost models have been used in the past to establish FCC subsidies, such as Universal Service Fund payments made to small telephone companies under the ACAM program. For a peek into how the cost models work, this link is from an FCC docket in 2013 when the small telcos challenged some aspects of the cost models. The docket explains some of the basics of how the cost models function.

This blog is not meant to criticize CostQuest, because no generic nationwide cost model can capture the local nuances that impact the cost of building fiber in a given community. It’s an impossible task. Consider the kinds of unexpected things that engineers encounter all of the time when designing fiber networks:

  • We worked in one county where the rural utility poles were in relatively good shape, but the local electric company hadn’t trimmed trees in decades. We found the pole lines were now 15 feet inside heavy woods in much of the fiber construction area.
  • We worked in another county where 95% of the county was farmland with deep soil where it was inexpensive to bury fiber. However, a large percentage of homes were along a river in the center of the county that consisted of steep, rocky hills with old crumbling poles.
  • We worked in another county where many of the rural roads were packed dirt roads with wide water drainage ditches on both sides. However, the county wouldn’t allow any construction in the ditches and insisted that fiber be placed in the public right-of-way which was almost entirely in the woods.

Every fiber construction company can make a long list of similar situations where fiber construction costs came in higher than expected. But there are also cases where fiber construction costs are lower than expected. We’ve worked in farm counties where road shoulders are wide, the soil is soft, and there are long stretches between driveways. We see electric cooperatives that are putting ADSS fiber in the power space for some spectacular savings.

Generic cost models can’t keep up with the fluctuations in the marketplace. For example, I saw a few projects where the costs went higher than expected because Verizon fiber construction had lured away all local work crews for several years running.

Cost models can’t possibly account for cases where fiber construction costs are higher or lower than what might be expected in a nearby county with seemingly similar conditions. No cost model can keep up with the ebb and flow of the availability of construction crews or the impact on costs from backlogs in the supply chain.

Unfortunately, the FCC determines the amount to be awarded for some grants using these cost models, such as the recently completed RDOF grants. The starting bid for each Census block in the RDOF auction was determined using the results of the cost models – and the results make little sense to people who understand the cost of building fiber.

One might expect fiber construction costs to easily be three or four times higher per mile in parts of Appalachia compared to the open farmland plains in the Midwest. However, the opening bids for RDOF in Appalachia were not proportionately higher to the degree you might expect. The net result is that the grants offered a higher percentage of the expected construction cost in the open plains than in the mountains of Appalachia.
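
A tiny sketch makes the distortion easier to see. The per-passing costs and opening bids below are invented purely for illustration; the point is that when opening bids don't scale with the real cost of construction, the grant ends up covering a much larger share of the cost in easy terrain than in hard terrain.

```python
# Illustration of the cost-model distortion: if opening bids don't scale with
# the true cost per passing, the grant covers very different shares of cost in
# different geographies. All dollar figures are invented for illustration.

def grant_share(opening_bid, estimated_cost):
    """Grant offered as a share of the engineer-estimated construction cost."""
    return opening_bid / estimated_cost

plains_cost, appalachia_cost = 4_000, 14_000   # assumed cost per passing
plains_bid, appalachia_bid = 2_600, 4_500      # assumed opening bids

print(f"Open plains: {grant_share(plains_bid, plains_cost):.0%} of cost")          # 65%
print(f"Appalachia:  {grant_share(appalachia_bid, appalachia_cost):.0%} of cost")  # 32%
```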

There is an alternative to using the cost models – a method that is used by many state grants. Professional engineers estimate construction costs, and many state grant programs then fund some percentage of the project cost based upon factors like the technology to be constructed. This kind of grant would offer the same percentage of grant assistance across all of the different geographies of a state. Generic cost models end up advantaging or disadvantaging grant areas, without those accepting the grants even realizing it. The RDOF grants offered drastically different proportions of the cost of construction – which is unfair and impossible to defend. This is another reason not to use reverse auctions, where the government goofs up the fairness of the grants before they are even open for bidding.

The White House Broadband Plan

Reading the White House $100 billion broadband plan was a bit eerie because it felt like I could have written it. The plan espouses the same policies that I’ve been recommending in this blog. This plan is 180 degrees different from the Congressional plan that would fund broadband using a giant federal reverse auction and a series of state reverse auctions.

The plan starts by citing the 1936 Rural Electrification Act which brought electricity to nearly every home and farm in America. It clearly states that “broadband internet is the new electricity” and is “necessary for Americans to do their jobs, to participate equally in school learning, health care, and to stay connected”.

The plan proposes to fund building “future proof” broadband infrastructure to reach 100 percent broadband coverage. It’s not hard to interpret future proof to mean fiber networks that will last for the rest of the century versus technologies that might not last for more than a decade. It means technologies that can provide gigabit or faster speeds that will still support broadband needs many decades from now.

The plan wants to remove all barriers so that local governments, non-profits, and cooperatives can provide broadband – entities without the motive to jack up prices to earn a profit. The reference to electrification implies that much of the funding for modernizing the network might come in the form of low-interest federal loans given to community-based organizations. The same approach during electrification spurred the formation of electric cooperatives, and this plan would do something similar now. I favor this as the best use of federal money because funding the infrastructure with federal loans means that the federal coffers eventually get repaid.

The plan also proposes giving tribal nations a say in the broadband build on tribal lands. This is the third recent funding mechanism that talks about tribal broadband. Most Americans would be aghast at the incredibly poor telecom infrastructure that has been provided on tribal lands. We all decry the state of rural networks, but tribal areas have been provided with the worst of the worst in both wired and wireless networks.

The plan promotes price transparency so that ISPs must disclose the real prices they will charge. This means no more hidden fees and deceptive sales and billing practices. This likely means writing legislation that gives the FCC and FTC some real teeth for ending deceptive billing practices of the big ISPs.

The plan also proposes to tackle broadband prices. It notes that millions of households that have access to good broadband networks today can’t use broadband because “the United States has some of the highest broadband prices among OECD countries”. The White House plan proposes temporary subsidies to help low-income homes but wants to find a solution to keep prices affordable without subsidy. Part of that solution might be the creation of urban municipal, non-profit, and cooperative ISPs that aren’t driven by profits or Wall Street earnings. This goal also might imply some sort of federal price controls on urban broadband – an idea that is anathema to the giant ISPs. Practically every big ISP regulatory policy for the last decade has been aimed at keeping the government from thinking about regulating prices.

This is a plan that will sanely solve the rural broadband gap. It means giving communities time to form cooperatives or non-profits to build broadband networks rather than shoving the money out the door in a hurry in a big reverse auction. This essentially means allowing the public to build and operate its own rural broadband – the only solution I can think of that is sustainable over the long-term in rural markets. Big commercial ISPs invariably are going to overcharge while cutting services to improve margins.

Giving the money to local governments and cooperatives also implies providing the time to allow these entities to do this right. We can’t forget that the electrification of America didn’t happen overnight, and it took some communities more than a decade to finally build rural electric networks. The whole White House infrastructure plan stretches over 8 – 10 years – it’s an infrastructure plan, not an immediate stimulus plan.

It’s probably obvious that I love this plan. Unfortunately, this plan has a long way to go to be realized. There is already proposed Congressional legislation that takes nearly the opposite approach and would shove broadband funding out the door within 18 months in a gigantic reverse auction. We already got a glimpse of how poorly reverse auctions can go in the recently completed RDOF auction. I hope Congress thinks about the White House plan that would put the power back into the hands of local governments and cooperatives to solve the broadband gaps. This plan is what the public needs because it creates broadband networks and ISPs that will still be serving the public well a century from now.

A Surprise

I think my biggest industry surprise of the last year happened recently when I opened the front door and found that a new yellow page directory had been placed on my porch. I haven’t received a yellow pages directory for the last seven years living in the US or the decade before that living in the Virgin Islands. I hadn’t given it much thought, but I assumed the yellow pages were dead.

The yellow pages used to be a big deal. Salespeople would canvass every business in a community and sell ads for the annually produced book. I remember when living in Maryland that the Yellow Pages was at least three inches thick just for the Maryland suburbs of DC and that there were similar volumes for different parts of the DC metropolitan area.

Wikipedia tells me that the yellow pages were started by accident in Cheyenne, Wyoming in 1883 when a printer ran out of white paper and used yellow in printing a directory. The idea caught on quickly and Reuben H. Donnelley printed the first official Yellow Pages directory in 1886.

Yellow Page directories became important to telephone companies as a significant source of revenue. The biggest phone companies produced their directories internally through a subsidiary. For smaller telcos, the yellow page ads were sold, and directories were printed, by outside vendors like Donnelley that shared ad revenues with the phone company. The revenue stream became so lucrative in the 1970s and 1980s that many medium-sized telephone companies took the directory function in-house – only to find out how hard it was to sell ads to every business in a market. The market for yellow pages got so crazy that competing books were created for major metropolitan markets.

Yellow pages were a booming business until the rise of the Internet. The Internet was supposed to replace the yellow pages. The original yellow pages vendors moved entire yellow page directories online, but this was never a big hit with the public. It was so much easier to leaf through a paper directory, circle numbers of interest, and take notes on the page than it was to scroll through pages of listings online.

Merchants always swore that yellow page ads were effective. A merchant that was creative in getting listed in the right categories would get calls from all over a metropolitan area if they sold something unique.

Of course, there was also a downside to yellow pages. The yellow paper and the glue used to bind the thick books meant that the paper wasn’t recyclable. A huge pile of books ended up in landfills every year when the new books were delivered. After the directories lost some of their importance, many cities required that directories be delivered only to homes that asked for them, to reduce the huge pile of paper in the landfills.

Yellow pages are just another aspect of telephony that has largely faded away. There was a time that you saw yellow pages sitting somewhere near the main telephone in every home you visited. It’s something that we all had in common – and it’s something that the consumer found to be invaluable. A new business knew they had made it when they saw their business first listed in the yellow pages.

The Accessible, Affordable Internet Act for All – Part 2

This is the second look at the Accessible, Affordable Internet Act for All sponsored by Rep. James E. Clyburn from South Carolina and Sen. Amy Klobuchar from Minnesota. The first blog looked at the problems I perceive from awarding most of the funding in a giant reverse auction.

In a nutshell, the bill provides $94 billion for broadband expansion. A huge chunk of the money would be spent in 2022, with 20% of the biggest funds deferred for four years. There are other aspects of the legislation worth highlighting.

One of the interesting things about the bill is the requirements that are missing. I was surprised to see no ‘buy American’ requirement. While this is a broadband bill, it’s also an infrastructure bill and we should make sure that infrastructure funding is spent as much as possible on American components and American work crews.

While the bill has feel-good language about hoping that ISPs offer good prices, there is no prohibition that I can find against practices like data caps imposed in grant-funded areas that can significantly increase monthly costs for a growing percentage of households.

The most dismaying omission from the bill is any accountability for those accepting the various federal grant funds. Many state grant programs come with significant accountability. ISPs must often submit proof of construction costs to get paid. State grant agencies routinely visit grant projects to verify that ISPs are building the technology they promised. There is no such accountability in the grants awarded by this bill, just as there was no accountability in the recent RDOF grants or the recently completed CAF II grants. In the original CAF II, the carriers self-certified that the upgrades had been made and provided no backup beyond the certification that the work was done. There is a widespread belief that many of the CAF II upgrades were never done, but we’ll likely never know since the telcos that accepted the grants don’t have any reporting requirements to show that the grant money was spent as intended.

There is also no requirement to report the market success of broadband grants. Any ISPs building last-mile infrastructure should have to report the number of households and businesses that use the network for at least five years after construction is complete. Do we really want to spend over $90 billion for grants without asking the basic question of whether the grants actually helped residents and businesses?

This legislation continues a trend I find bothersome. It will require all networks built with grant funding to offer a low-income broadband product – which is great. But it then sets the speed of the low-income service at 50/50 Mbps while ISPs will be required to provide 100/100 Mbps or faster to everybody else. While it’s hard to fault a 50/50 Mbps product today, that’s not always going to be the case as homes continue to need more broadband. I hate the concept that low-income homes get slower broadband than everybody else just because they are poor. We can provide a lower price without cutting speeds. ISPs will all tell legislators that there is no difference in cost in a fiber network between a 50/50 Mbps and a 100/100 Mbps service. This requirement is nothing more than a backhanded way to remind folks that they are poor – there is no other reason for it that I can imagine.

One of the interesting requirements of this legislation is that the FCC must gather consumer prices for broadband. I’m really curious how this will work. I studied a market last year where I gathered hundreds of customer bills and found that almost no two homes were being charged the same rate for the same broadband product. Because of special promotional rates, negotiated rates, bundled discounts, and hidden fees, I wonder how ISPs will honestly answer this question and how the FCC will interpret the results.
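
As a small illustration of why price data is so messy, here is a sketch with made-up bill components showing how the same list price turns into different effective prices once promotions, bundle discounts, and add-on fees are applied. None of these numbers come from a real bill.

```python
# Why 'the price' of broadband is hard to report: the effective monthly price
# depends on promotions, bundle discounts, and add-on fees that vary from bill
# to bill. All bill components below are made up for illustration.

def effective_price(list_price, promo=0.0, bundle_discount=0.0, fees=0.0):
    """What a household actually pays per month for broadband."""
    return list_price - promo - bundle_discount + fees

bills = [
    # (list price, promo, bundle discount, added fees)
    (79.99, 20.00, 0.00, 12.50),  # new customer on a promotion, plus equipment fees
    (79.99, 0.00, 10.00, 12.50),  # bundled customer, same nominal product
    (79.99, 0.00, 0.00, 0.00),    # customer who negotiated the fees away
]

for bill in bills:
    print(f"list ${bill[0]:.2f} -> effective ${effective_price(*bill):.2f}")

# Three 'identical' $79.99 products yield three different effective prices.
```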

The bill allocates a lot of money for ongoing studies and reports. For example, there is a new biennial report that quantifies the number of households where cost is a barrier to buying broadband. I’m curious how that will be done in any meaningful way that differs from the mountains of demographic data showing that broadband adoption has almost a straight-line relationship to household income. I’m not a big fan of creating permanent reporting requirements for the government that will never go away.