Powering the Future

For years there have been predictions that the world would be filled with small sensors that would revolutionize the way we live – five years ago, the forecast was that we’d soon be living in a cloud of sensors. The limitation on realizing that vision has been figuring out how to power all of those sensors and other small electronics. Traditional batteries are too expensive and have a limited life. As you might expect, scientists from around the world have been working on better power technologies.

Self-Charging Batteries. The California company NDB has developed a self-charging battery that could remain viable for up to 28,000 years. Each battery contains a small piece of radioactive carbon-14 recovered from recycled nuclear fuel rods. As the isotope decays, the battery uses a heat sink of lab-created carbon-12 diamond, which captures the energetic particles of the decay while acting as a tough physical barrier to contain the radiation.

The battery consists of multiple layers of radioactive material and diamond and can be fashioned into any standard battery size, like a AAA. The overall radiation level of the battery is low – less than the natural radiation emitted by the human body. Each battery is effectively a small power generator in the shape of a traditional battery that never needs to be recharged. One of the most promising aspects of the technology is that nuclear power plants pay NDB to take the radioactive material.

Printed Flexible Batteries. Scientists at the University of California San Diego have been researching batteries that use silver oxide-zinc chemistry. They’ve been able to create a flexible device that offers ten times the energy density of lithium-ion batteries. The flexible material means that batteries can be shaped to fit devices instead of devices being designed to fit batteries.

Silver-zinc batteries have been around for many years, and the breakthrough is that the scientists found a way to screen print the battery material, meaning a battery can be placed onto almost any surface. The printing is done in a vacuum and layers on the current collectors, zinc anode, cathode, and separator to create a polymer film that is stable up to almost 400 degrees Fahrenheit. The net result is a battery with ten times the power output of a lithium-ion battery of the same size.

Anti-Lasers. Science teams from around the world have been working to create anti-lasers. A laser operates by beaming out photons, while an anti-laser sucks up photons from the environment. An anti-laser could be used in a laptop or cellphone to collect photons and use them to charge the battery in the device.

The scientific name for the method being used is coherent perfect absorption (CPA). In practice, this requires one device that beams out photons and devices with CPA technology to absorb them. In the laboratory, scientists have been able to capture as much as 99.996% of the transmitted power, making this more energy-efficient than plugging a device into electric power. There are numerous possible uses for the technology, starting with the obvious ability to charge devices that aren’t plugged into electricity. But the CPA devices have other possible uses. For example, the devices are extremely sensitive to changes in the photons in a room and could act as highly accurate motion sensors.

Battery-Free Sensors. In the most creative solution I’ve read about, MIT scientists started a new firm, Everactive, and have developed sensors that don’t require a battery or external power source. The key to the Everactive technology is the use of ultra-low-power integrated circuits that are able to harvest energy from ambient sources like low light, background vibrations, or small temperature differentials.

Everactive is already deploying sensors in applications where it’s hard to change sensors, such as inside steam-generating equipment. The company also makes sensors that monitor rotating machinery and that are powered by the vibrations coming from the machinery. Everactive says its technology has a much lower lifetime cost than traditionally powered sensors when considering the equipment downtime and cost required to periodically replace batteries.

The Fiber Backlog

One of the issues facing new fiber projects in 2021 is the backlog and long lead times for ordering fiber cable. I’ve heard recently from clients that have been told it will take them from four to nine months to get new fiber.

Part of this delay can be blamed on the pandemic, as factories and shippers everywhere got turned sideways during 2020. We saw a big slowdown in electronics after the first quarter of 2020 that was due to the pandemic. Some of this was because the city of Wuhan, China is where a lot of optical electronics are manufactured – and it was also ground zero for the pandemic. Electronics production in Wuhan ground to a quick stop when the local government responded with a total and prolonged shutdown.

The local backlog in Wuhan eventually cleared, but the industry started looking for workarounds. Many of the vendors that were relying on factories in Wuhan moved part of their manufacturing to other countries as a hedge against having all production located in one concentrated area. This wasn’t easy to do during the pandemic. That’s a shift that has been overdue for years, because eventually something was going to happen locally in Wuhan to pinch the supply chain – be it this pandemic, major weather events, or politics. I think many vendors learned a lesson and are going to diversify their supply chains in the future. This is going to cost Wuhan a lot of business but will be better for the rest of the world. I’m hoping that at least some of this manufacturing finds its way back to the US – the fact that electronics are all made overseas feels like a national security issue to me.

But the backlog in fiber preceded the pandemic – there was already a backlog before the beginning of 2020. The pandemic added to the backlog, but it’s something that was already building. The backlog in fiber seems like more of a traditional supply and demand issue.

The world has been building fiber at an astonishing and accelerating pace. Just in this country, there are fiber projects everywhere. There are a few big companies like Verizon that have been buying huge quantities of fiber. For example, Verizon announced in 2017 that it was going to buy over $1 billion in fiber from Corning over a few years – up to 12.4 million miles of cable. But seemingly everybody else is also building fiber. Until the pandemic curtailed my travel, it seemed like I saw fiber construction crews almost everywhere I went. Just a few years earlier, spotting a fiber crew was a rarity.

There is definitely a backlog in fiber, but the backlog is far more pronounced for smaller fiber buyers and worst of all for new fiber buyers. This is where normal supply and demand kicks in. A company like Corning is always going to put Verizon at the front of the delivery queue. The largest buyers like Verizon worry about not having enough fiber, so they place large orders that eat up the capacity at factories. When there is word of supply chain problems and shortages, the big companies like Verizon order even more fiber to be safe.

This creates a shortage at the manufacturer, which can’t pledge the committed fiber to anybody else. Over time, as the big companies don’t take delivery of all of the fiber, the excess enters the supply chain for everybody else. This creates a fluctuation in supply that the manufacturer can’t predict. To some degree, much of the perceived shortage is artificial and is a result of fiber being allocated to the biggest buyers. The shortages start to look really long when these market fluctuations get layered on top of real shortages and slowdowns like the ones that happened during the early days of the pandemic.

The current shortage is probably not as bad as what buyers are being told by suppliers. Somebody being promised fiber in nine months will likely get it in six, and those being told six months will probably get it in four months. But those are still historically long waits for fiber.

There is not a whole lot that a new fiber buyer can do about the situation. Big carriers buy directly from the manufacturers, and it’s not likely that Verizon and other big buyers are waiting long for fiber. Everybody else in the industry buys fiber through wholesale supply houses, and these are the ones seeing the biggest impact from the yo-yoing supply. Just like the manufacturers take care of the huge buyers, a supply house takes care of its long-time buyers first, so small and new fiber buyers are at the end of the supply chain. In a true shortage, like the one that happened years ago when one of the major fiber factories burned down, the smallest buyers might not even be able to get fiber.

This current shortage will eventually clear and the market will return to normal – it always does. But for 2021 and even beyond, a new fiber buyer needs to order early or face sitting around waiting on fiber.

Building Rural Coaxial Networks

Charter won $1.22 billion in the RDOF grant auction and promised on the short-form to build gigabit broadband. Charter won grant areas in 24 states, including being the largest winner in my state of North Carolina. I’ve had several people ask me if it’s possible to build rural coaxial networks, and the answer is yes, but with some caveats.

Charter and other cable companies use hybrid fiber-coaxial (HFC) technology to deliver service to customers. This technology builds fiber to neighborhood nodes and then delivers services from the nodes using coaxial copper cables. HFC networks follow a standard called DOCSIS (Data Over Cable Service Interface Specification) that was created by CableLabs. Charter currently uses the latest standard, DOCSIS 3.1, which easily allows for the delivery of gigabit download speeds, but far slower upload speeds.

There are several distance limitations of an HFC network that come into play when deploying the technology in rural areas. First, there is a limitation of roughly 30 miles between the network core and a neighborhood node. The network core in an HFC system is called a CMTS (cable modem termination system). In urban markets, a cable company will usually have only one core, and there are not many urban markets where 30 miles is a limiting factor. But 30 miles becomes a limitation if Charter wants to serve the new rural areas from an existing CMTS hub, which would normally be located in a larger town or county seat. In glancing through the rural locations that Charter won, I see places that are likely going to force Charter to establish a new rural hub and CMTS. There is new technology available that allows a small CMTS to be deployed in the field, so perhaps Charter is looking at this approach. It’s not a technology that I’ve seen used in the US, and the leading manufacturers of small CMTS technology are the Chinese electronics companies that are banned from selling in the US. If Charter is going to reach rural neighborhoods, in many cases it will have to deploy a rural CMTS in some manner.

The more important distance limitation is in the last mile of the coaxial network. Transmissions over an HFC network can travel about 2.5 miles without needing an amplifier. That isn’t very far, and amplifiers are routinely deployed to boost the signals in urban HFC networks. Engineers tell me that the maximum number of amplifiers that can be deployed in a cascade is five, and beyond that number the broadband signal strength quickly dies. This limitation means that the longest run of coaxial cable to reach homes is about 12.5 miles. That’s 12.5 miles of cable, not 12.5 miles as the crow flies.
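To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch using the rough numbers above – treat the 2.5-mile segments and the five-amplifier cascade as this post’s assumptions, not as engineering limits for any particular plant design.

```python
# Back-of-the-envelope coax reach, using the rough numbers cited above.
# Assumptions (from this post, not a design spec): a coax segment runs about
# 2.5 miles before the signal needs boosting, and the cascade is limited to
# roughly five amplifiers before broadband performance falls apart.

SEGMENT_MILES = 2.5    # approximate coax run per segment
MAX_AMPLIFIERS = 5     # rough limit on amplifiers in a cascade

def max_coax_run_miles(segment_miles: float = SEGMENT_MILES,
                       max_amplifiers: int = MAX_AMPLIFIERS) -> float:
    """Longest coax run from a fiber node, following the post's arithmetic
    of roughly five 2.5-mile segments (about 12.5 cable-miles)."""
    return segment_miles * max_amplifiers

print(f"Approximate longest coax run: {max_coax_run_miles():.1f} cable miles")
# Prints 12.5 – and that's cable miles, not miles as the crow flies.
```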

To stay within the 12.5-mile limit, Charter will have to deploy a lot of fiber and create rural nodes that might serve only a few homes. This was the same dilemma faced by the big telcos when they were supposed to upgrade DSL with CAF II money – the telcos needed to build fiber deep into rural areas to make it work. The telcos punted on the idea, and we now know that a lot of the CAF II upgrades were never made.

Charter faces another interesting dilemma in building an HFC network. The price of copper has steadily grown over the last few decades, and copper now costs four times what it did in 2000. This means that coaxial cable is relatively expensive to buy (a phenomenon that anybody building a new house discovers when they hear the price of new electrical wiring). It might make sense in a rural area to build more fiber to reduce the miles of coaxial cable.

Building rural HFC makes for an interesting design. There were a number of rural cable systems built sixty years ago at the start of the cable industry, because these were the areas in places like Appalachia that had no over-the-air TV reception. But these early networks carried only a few channels of TV, meaning that the distance limitations were a lot less critical. But there have been few rural cable networks built in more recent times. Most cable companies have a metric where they won’t build coaxial cable plant anywhere with fewer than 20 homes per road mile. The RDOF grant areas are far below that metric, and one has to suppose that Charter thinks that the grants make the math work.

To answer the original question – it is possible to build rural coaxial networks that can deliver gigabit download speeds. But it’s also possible to take some shortcuts and overextend the amplifier budget and curtail the amount of bandwidth that can be delivered. I guess we’ll have to wait a few years to see what Charter and others will do with the RDOF funding.


$100 Broadband

Advocates for digital inclusion have shown that the primary reason that many homes don’t buy broadband is price – homes can’t afford the broadband from the big cable companies. The title of this blog is ‘$100 Broadband’ because we’re on a trajectory for that to become the normal price of broadband in just a few years.

Broadband is already expensive, and the cable companies are now in the mode of raising rates every year. Consider the prices already charged today by Comcast and Charter.

The Comcast basic ‘Performance’ broadband product is priced at $76 starting on January 1, an increase of $3. To go along with this, Comcast is now charging $14 per month for a modem, an increase of $1 per month. This means somebody who is not receiving special pricing and is not in a bundle is now paying $90 per month for basic broadband. That rate doesn’t include the extra fees being levied on households that exceed the monthly 1.2-terabyte data cap. If Comcast continues to increase rates by $4 per year, then the all-in price will reach $98 in 2023 and $102 in 2024.

Not all Comcast customers pay this full rate today, but many eventually will. New customers who have switched from DSL probably have special low introductory rates that revert to the list price after a one or two-year contract. A large percentage of Comcast customers pay less than the list price through bundling. Nobody with a bundle knows what they pay for broadband, but they quickly find out that they are expected to pay the list price if they dare to cut the cord and break the bundle.

Charter is not as expensive as Comcast. The company just raised the rate for its basic broadband product by $5 on December 1, to $74.99. In addition, Charter charges $5 for a modem, bringing the standalone price for broadband to $79.99. At a $5 annual rate increase, the company will hit a $100 rate in four years. Charter has also petitioned the FCC to allow it to bill for data caps – something that will substantially increase rates for homes with people working or attending school from home.
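Here’s a minimal sketch of the arithmetic behind those projections, assuming the standalone rates and modem fees above and a steady annual increase – the increases are this post’s assumption, not anything the cable companies have announced:

```python
# Project the standalone broadband price (service + modem) forward a few years,
# assuming the 2021 rates above and a steady annual increase.

def project_prices(start_year, base_rate, modem_fee, annual_increase, years):
    """Return {year: all-in monthly price} assuming a flat yearly increase."""
    total = base_rate + modem_fee
    return {start_year + i: round(total + annual_increase * i, 2)
            for i in range(years + 1)}

# Comcast: $76 'Performance' tier + $14 modem, assumed +$4 per year
print("Comcast:", project_prices(2021, 76.00, 14.00, 4.00, 3))
# -> {2021: 90.0, 2022: 94.0, 2023: 98.0, 2024: 102.0}

# Charter: $74.99 basic broadband + $5 modem, assumed +$5 per year
print("Charter:", project_prices(2021, 74.99, 5.00, 5.00, 4))
# -> {2021: 79.99, ..., 2025: 99.99} – roughly $100 in four years
```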

I am certain that most consumers don’t know the full price of broadband. My consulting firm does residential surveys, and in a few recent surveys in Comcast markets, the average Comcast customer thinks broadband costs around $70. This speaks to the power of hidden fees – the average customer doesn’t think of the ridiculously high $14 modem fee as a broadband charge. Comcast and the other big ISPs have mastered the art of confusing customers with billing practices that make it hard to see the price of a given product.

Comcast also hides its rates from the general public. If you don’t believe me, search for Comcast rates on the web – all you’ll easily find are the rates being charged to customers that switch from DSL. You won’t find the company talking about its actual rates outside of small-print footnotes – and even the small print won’t mention the modem charge.

I predict that the cable companies are going to start quietly cutting back on special pricing and bundling discounts. Those discounts no longer make competitive sense in markets where the only other competitor is telco DSL. AT&T recently announced it will not be connecting new DSL customers, meaning that a cable company likely has no competition in markets where AT&T is the telco. But the cable companies have largely obliterated DSL in almost every market. It has to be dawning on cable companies that they have won the broadband war and no longer have to give deep discounts to win and keep customers. The cable companies are now de facto monopolies in most markets, and they will start acting like monopolies. And that means, among other things, charging full price for services.

Right now, the FCC has no authority over broadband prices since the agency wrote itself out of the broadband regulation business. But if the FCC never discourages cable companies from continually raising rates, we’re going to be looking at rates of $150 per household in a decade. Monopolies are going to keep raising rates until a regulator steps in and tells them to knock off the nonsense.

The Cost of Using Poles

The Georgia Public Service Commission (GPSC) passed a rule recently that reduces the cost of pole attachments to $1 per year per pole for anybody that builds broadband in areas of the state that the state considers to be unserved. They titled this the One Buck Deal. The state has created its own broadband map that undoes many of the errors in the FCC’s broadband maps and shows that over 500,000 rural homes don’t have broadband.

I really don’t mean to detract from any effort to make it easier to build rural fiber – but pole attachment fees are not what is stopping companies from building rural fiber. It’s easy to understand how regulators got this idea, because the big ISPs have been screaming about pole attachment fees for years. And at the national level, the biggest fiber builders have claimed that pole attachment fees are an impediment.

From an operating perspective, annual pole attachment fees are a relatively minor cost for most network owners. The biggest expenses for operating a new fiber project are labor and interest on debt. Other big expenses include the cost of the Internet backbone, billing, and marketing. Pole attachments fall far down the list, and for most projects I’ve worked with, the cost of pole attachments is rarely more than a percent or two of total operating expenses. While the GPSC gesture of reducing these fees would be welcome to a fiber overbuilder, avoiding 1% of operating costs isn’t going to move the needle on any business plan.
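As a quick illustration of why a $1 attachment rate doesn’t move the needle, here’s a small sketch – the pole density, fee, and operating budget below are invented for illustration and don’t come from any actual project:

```python
# Hypothetical illustration: pole attachment rent as a share of a rural fiber
# network's operating budget. Every number below is invented for illustration.

poles_per_mile = 25            # hypothetical rural pole density
route_miles = 200              # hypothetical network size
fee_per_pole = 5.00            # hypothetical annual attachment fee per pole
annual_operating_expense = 1_500_000  # hypothetical total annual opex

pole_rent = poles_per_mile * route_miles * fee_per_pole
print(f"Annual pole rent: ${pole_rent:,.0f} "
      f"({pole_rent / annual_operating_expense:.1%} of operating expense)")
# With these made-up numbers, pole rent is $25,000 a year – under 2% of opex.
# Cutting the fee to $1 per pole saves $20,000 a year, which is welcome but
# won't rescue a business plan that doesn't otherwise work.
```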

The biggest cost of deploying fiber is the construction cost of building the fiber network along each road in a service area. Poles play a major role in the cost equation, but it’s not the fees to rent the poles that are the problem. The biggest cost culprit in putting fiber on poles is something the industry calls make-ready. This is the cost of getting poles ready before fiber can be hung. There are national electrical standards that define the spacing between the wires of different utilities – rules that are designed to provide safety to technicians who must work on poles, particularly when trying to fix storm damage.

Make-ready costs fall into three general categories. Some make-ready involves fixing existing problems with wires. The original utilities on the poles may not have followed the safety rules, and there are often many cases where wires are already out of compliance. Cables may be installed too close to neighboring wires. Wires might have too much sag, making it hard for an additional attacher. Unfortunately, the make-ready rules say that the new fiber attacher must pay the full cost of fixing these existing problems.

The second category of make-ready involves situations where there is not enough room for a new attacher. In these cases, the pole must be replaced with a taller pole and each existing attacher must move its wires to the new pole. Unfortunately, the new attacher must pay for all of these costs as well. The final category of make-ready cost, in areas with a lot of trees, is tree trimming. Electric utilities are supposed to keep trees trimmed out of the way of the wires on a pole – but if they have been lax in this effort, then the new fiber attacher must also pick up these costs.

It’s not unusual for make-ready costs to range from $10,000 to $20,000 per mile, with some cases we know of as high as $50,000 per mile. The highest costs come with pole owners (generally electric companies) that have neglected pole maintenance for many years. A new fiber builder is often saddled with replacing poles that are rotted or leaning – something that the utilities should have been routinely fixing over the years. I know of cases where practically every pole needs to be replaced – and this can generally be pinned on the absence of maintenance by the pole owner.

If the GPSC really wanted to make it easier to build rural fiber, it would have tackled the make-ready issue aggressively. It’s crazy that a new pole attacher must pay to fix the existing safety violations of the utilities already using the pole. It’s massively unfair that a new fiber attacher should pay the full cost to replace poles that are old, rotted, and already unsafe.

But fixing the make-ready issue means taking on the powerful lobbies of existing utilities. The telcos, cable companies, and electric utilities collectively have a huge presence in most state legislatures. They are perfectly happy with the status quo where the new guy pays to fix all past sins.

I hope the Georgia idea doesn’t catch on. Regulators and state politicians look for easy ways to say that they are doing something to fix the rural broadband problem. They will point to things like the One Buck Deal to prove they are taking action – when in fact, actions like this one don’t make it any easier to build rural fiber. If regulators want to fix rural pole issues, then they should be fixing the 99% cost problem of pole make-ready instead of the 1% cost issue of pole attachment fees.

Explaining Open RAN

If you read more than an article or two about 5G and cellular technology, you’re likely to run across the term Open RAN. You’ll get a sense that this is a good thing, but unless you understand cellular networks the term probably means little else. Open RAN is a movement within the cellular industry to design cellular networks using generic equipment modules so that networks can be divorced from proprietary technologies and controlled by software. This is akin to what has happened in big data centers, where software now controls generic servers.

The first step in creating Open RAN has been to break the network down into specific functions to allow for the development of generic hardware. Today’s cellular networks have two major components – the core network and the radio access network (RAN). The easiest analogy for the core network is a tandem switching center. Cellular carriers have regional hubs where a set of electronics and switches processes the traffic from large numbers of cell sites. The RAN is all of the cell sites where the cellular company maintains a tower and radios to communicate with customers.

Open RAN has broken the cell network into three generic modules. The radio unit (RU) is located near, or incorporated into, the antenna and is the electronics that transmits and receives signals from customers. The distributed unit (DU) is the brains at the cell site. The centralized unit (CU) is a more generic set of core hardware that communicates between the core and the distributed units.

The next step in developing Open RAN has been to ‘open’ the protocols and interfaces between the components of the cellular network. The industry has created the O-RAN Alliance, which has developed open-source software that controls all aspects of the cellular network. The software has been developed in eleven generic modules that handle the major functions of the cellular network. For example, there is a software module for controlling the fronthaul function between the radio unit and the distributed unit, a module for the midhaul function between the distributed unit and the centralized unit, and so on.
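To make the split concrete, here’s a tiny illustrative sketch of how the pieces line up, following the RU/DU/CU description above – the class names and structure are just an illustration, not any vendor’s or the O-RAN Alliance’s actual interface definitions:

```python
# Illustrative model of the Open RAN functional split described above.
# This is a sketch for explanation only – real O-RAN interfaces are defined
# by detailed specifications, not by these simplified classes.
from dataclasses import dataclass, field

@dataclass
class RadioUnit:           # RU: at or near the antenna, transmits and receives
    site: str

@dataclass
class DistributedUnit:     # DU: the "brains" at the cell site
    site: str
    radio_units: list = field(default_factory=list)        # fronthaul links

@dataclass
class CentralizedUnit:     # CU: generic hardware between the core and the DUs
    region: str
    distributed_units: list = field(default_factory=list)  # midhaul links

# Build a toy network: one CU serving two cell sites, each with its own DU and RU.
cu = CentralizedUnit(region="example-region")
for site in ("tower-1", "tower-2"):
    cu.distributed_units.append(
        DistributedUnit(site=site, radio_units=[RadioUnit(site=site)]))

print(f"CU '{cu.region}' manages {len(cu.distributed_units)} DUs over midhaul, "
      f"each with fronthaul links to its own RU")
```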

While the industry has created generic open-source software, each large carrier will create its own flavor of the software to configure features the way it wants them. Today it’s hard to tell the difference between using AT&T versus T-Mobile, but that is likely to change over time as each carrier develops its own set of features.

There are some huge benefits to an Open RAN network. The first is savings on hardware. It’s far less expensive to buy generic radios than proprietary radio systems from one of the major vendors. In data centers, we’ve seen generic switches and servers drop hardware costs by as much as 80%.

But the biggest benefit of Open RAN is the ability to control cell sites with a single software system. Today, updating cell sites to add a new feature is a mind-boggling task if the upgrade requires any hardware – it means a technician visiting every cell site in a nationwide network. Even software upgrades are a challenge and often have to be done on site today since there are numerous configurations of cell sites in a network. With Open RAN, features would be fully software-driven and could all be updated at the same time.

The cellular carriers love the concept because Open RAN frees them to develop unique solutions for customers that are software-driven and not limited by proprietary hardware and software. The industry has always talked about developing specialized features for industries like agriculture or hospitals, and Open RAN provides the platform to finally do that. Even better, each major hospital chain could have the unique features it desires. This leads to an exciting future where customers can help design their own features rather than choosing from a menu of industry features.

Interestingly, the Open RAN concept will also carry over into cellphones, where the best cellphones will have generic chips that can be updated with new features without having to upgrade phones every few years.

Converting to Open RAN won’t be cheap or easy because it will ultimately mean scrapping most of the electronics and software being used today at every cell site. We’re likely to first see the big carriers breaking in Open RAN by segments, such as using the solution for small cell sites before converting the big tower sites.

One cellular carrier is likely to take the lead in this movement. Dish Network is in the process of building a nationwide cellular network from scratch, and the company has fully embraced Open RAN. This will put pressure on the other carriers to catch up if Dish’s nimble network starts capturing large nationwide customers.

Technology Trends for 2021

The following are the most important current trends that will be affecting the telecom industry in 2021.

Fiber Construction Will Continue Fast and Furious in 2021. Carriers of all shapes and sizes are still building fiber. There is a bidding war going on to get the best construction crews and fiber labor rates are rising in some markets.

The Supply Chain Still has Issues. The huge demand for building new fiber had already put stress on the supply chain at the beginning of 2020. The pandemic increased the delays as big buyers reacted by re-sourcing some of the supply chain outside of China. By the end of 2020, there was a historically long waiting time for new and smaller buyers to get fiber, because the biggest fiber builders had pre-ordered huge quantities of fiber cable. Going into 2021 the delays for electronics have lessened, but there will be issues with buying fiber for much of 2021. By the end of the year, this ought to return to normal. Any new fiber builder needs to plan ahead and order fiber early.

Next-Generation PON Prices Dropping. The price of 10-gigabit PON technology continues to drop and is now perhaps only 15% higher than GPON, which supports speeds up to a symmetrical gigabit. Anybody building a new network needs to consider the next-generation technology, or at least choose equipment that will fit into a future overlay of the faster technology.

Biggest ISPs are Developing Proprietary Technology. In a trend that should worry smaller ISPs, most of the biggest ISPs are developing proprietary technology. The cable companies have always done this through CableLabs, but now companies like Comcast are striking out with their own versions of gear. Verizon is probably leading the pack and has developed proprietary fiber-to-the-curb technology using millimeter-wave spectrum as well as proprietary 5G equipment. The large ISPs are collectively pursuing open-source routers, switches, and FTTP electronics that each company will then control with its own proprietary software. The danger in this trend for smaller ISPs is that a lot of routinely available technology may become hard to find or very expensive when the big ISPs are no longer participating in the market.

Fixed Wireless Gear Improving. The electronics used for rural fixed wireless is improving rapidly as vendors react to the multiple new bands of spectrum approved by the FCC over the last year. The best gear now seamlessly integrates multiple bands of spectrum, and also meets the requirements to notify other carriers when shared spectrum bands are being used.

Big Telcos Walking Away from Copper. AT&T formally announced in October 2020 that it will no longer add new DSL customers. This is likely the first step for the company to phase out copper service altogether. The company has been claiming for years that it loses money maintaining the old technology. Verizon has been even more aggressive and has been phasing out copper service at the local telephone exchange level throughout the Northeast for the last few years. DSL budgets will be slashed and DSL techs let go, and as bad as DSL is today, it’s going to go downhill fast from here.

Ban on Chinese Electronics. The US ban on Chinese electronics is now in full force. Not only are US carriers forbidden from buying new Chinese electronics, but Congress has approved funding to rip out and replace several billion dollars of currently deployed Chinese electronics. This ostensibly is being done for network security because of fears that Chinese equipment includes a backdoor that can be hacked, but this is also tied up in a variety of trade disputes between the US and China. I’m amazed that we can find $2 billion to replace electronics that likely pose no threat but can’t find money to properly fund broadband.

5G Still Not Here. In 2021 there is still no actual 5G technology being deployed. Instead, what is being marketed today as 5G is really 4G delivered over new bands of spectrum. We are still three to five years away from seeing any significant deployment of the new features that define 5G. This won’t stop the cellular carriers from crowing about the 5G revolution for another year. But maybe we’ve turned the corner and there will be fewer than the current twenty 5G ads during a single football game.

Let’s Try Another Approach – Part II

Yesterday’s blog talked about the many problems with the recently concluded RDOF grant process. The FCC has an opportunity to clear up some of these problems through the long-form approval process. But if the FCC awards the grants as auctioned, then the FCC will have completely botched the only two giant-dollar broadband grant programs it has administered – the original $11 billion CAF II and now the $16 billion RDOF. Even if the FCC cleans up some of the worst problems, the agency has shown us that it should have no role in deciding who gets to build broadband networks.

There is a better and easier approach to successfully administering broadband grants sitting openly in front of us. Congress gave block grants to the states for CARES funding that included money for broadband. This money came with some odd strings and rules that kept changing, but from what I can see, the states did a decent job of administering the funds. The biggest hurdle with the CARES money was that states had to spend the money too quickly and were faced with having to repay any money that was later deemed not to meet the intentions of the CARES Act. But even with the CARES restrictions, each state carefully deliberated and debated the best way to use the money.

The most effective way to award large amounts of grants like the RDOF is through block grants that would allow each state to determine how to use the money. To be effective, block grants shouldn’t be saddled with any stupid rules or restrictions – but I think it’s clear that states know local broadband needs far better than the FCC. For example, one of the big problems with RDOF is the FCC determining the Census blocks where the grants were to be awarded based upon its admittedly crappy broadband mapping. With block grants, states should be free to pick where funds are allocated.

Block grants would have avoided most of the mistakes made by the FCC with RDOF. Consider the state of Minnesota as an example. The state already has a successful grant program that could have been expanded to deal with a large block grant. Minnesota uses its own broadband map and would have ignored the FCC mapping. Minnesota already has a minimum speed requirement for grant recipients of 100 Mbps download and would not have considered slower technologies. Minnesota would likely not have considered satellite broadband as an acceptable solution. In the RDOF, a large portion of the money in Minnesota was awarded to a WISP that says it will build fiber. The Minnesota grant program would likely have rejected this company for lack of technical experience and financial wherewithal – but that would be a local decision for the state to make.

Not every state has its act together as well as Minnesota and perhaps the FCC rules would have to create a few guidelines. For example, the current state grants in Washington are only awarded as a combination of a grant and a loan. The federal money should be awarded as pure grants that shouldn’t be complicated or encumbered with state loans.

States certainly wouldn’t be perfect with block grants, and some states would make boneheaded decisions. As an example, a few years ago a large state grant program was created in California, and before the money was awarded, AT&T and Frontier lobbyists used political influence to make sure most of the money went to them. However, there is currently huge public pressure to solve broadband gaps, and any politician who bungles state grants would be under a microscope and held accountable. I like that accountability. There is zero accountability when the FCC botches a grant. We never saw any heads roll for the disaster of the CAF II awards, when an $11 billion screw-up should have been front-page news. Giving the money to the states brings accountability and transparency that we don’t have at the federal level.

The FCC is a regulatory body and should never have tackled figuring out how to award $16 billion in grants. It was obvious from the start that the FCC had bitten off more than it could chew – but considering the time that the FCC had to get ready for this, I was surprised at the utter failure of the reverse auction process. The FCC could have instead announced state block grants and gotten positive acclaim from coast to coast. That would have been a legacy Ajit Pai could have been proud of – instead, he’ll be known as the guy who botched the RDOF grants.

Let’s Try Another Approach – Part 1

Anybody reading this blog already knows that I am not a fan of the recent RDOF grant program. If the FCC doesn’t figure out a way in the next few months to cancel the worst of the grant awards, when we look back six years from now we’ll find that half or more of the funding was wasted. The FCC has wasted money before – the $11 billion in CAF II for the big telcos was nearly all wasted – but solving the rural digital divide has become too important to keep throwing away grant money.

The RDOF grant process was doomed before it ever got started. The amount of grant money available in each Census block was based upon a massively flawed FCC cost model that pretends to understand the differences in broadband construction costs around the country. This unfortunately means that the FCC offered far too much grant funding in some places and not enough in others. This is a nuance of the grant that everybody seemed to have missed – the FCC pre-determined the amount of grant available for each Census block. I know areas where the FCC was offering 20% more than the cost of building a fiber network, and others where it wasn’t offering half of what is needed. The FCC’s faulty cost model does not accurately reflect the cost of getting onto bad poles or of encountering rock when burying fiber. We saw folks offering to build fiber in places where the FCC awards were overly generous, but no landline ISPs offering to build broadband where the awards were too low. People living in the Census blocks where the FCC awards were too small were doomed from the start to not see a decent broadband solution.

The FCC really blew it when it came to vetting the financial wherewithal of applicants. It allowed small companies with limited experience and weak balance sheets to claim huge amounts of funding, with the largest winning more than $1 billion in grants. This was not hard to foresee, and companies should have been given bidding limits according to their financial capability. Tackling this after the auction is over is a real mess.

The FCC also didn’t put any common-sense stops in place on the bidding. There are a huge number of Census blocks where the grant was finally awarded at less than 5% of what the FCC offered – many as low as 1%. Some of these recipients say they are going to build fiber in areas where the construction cost to build fiber is more than $15,000 per passing. Does anybody really believe that a grant recipient will build fiber after accepting only a few hundred dollars of grant per passing in these high-cost places?
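Here’s the rough arithmetic behind that, as a small illustrative sketch – the reserve price per passing is a made-up figure chosen to line up with the $15,000 construction cost mentioned above, not an actual auction value:

```python
# Illustrative arithmetic: what does winning at 1-5% of the FCC's offer mean
# per home passed? The reserve figure is hypothetical, chosen to line up with
# the roughly $15,000 per-passing construction cost mentioned above.

reserve_per_passing = 15_000      # hypothetical FCC reserve, $ per home passed
build_cost_per_passing = 15_000   # rough fiber construction cost per passing

for winning_pct in (0.05, 0.01):
    grant = reserve_per_passing * winning_pct
    shortfall = build_cost_per_passing - grant
    print(f"Bid at {winning_pct:.0%}: grant of ${grant:,.0f} per passing, "
          f"leaving about ${shortfall:,.0f} to fund some other way")
# At 5% the grant is $750 per passing; at 1% it's just $150 – a few hundred
# dollars toward a $15,000 build.
```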

Then there were problems due to the FCC not taking the time to do its homework. I’ve seen maps showing grant awards to places like large airports, giant parking lots, malls, and other assorted empty Census blocks. The FCC couldn’t afford to have somebody in the last year spend some time looking at the grant areas on Google Earth? Such areas should never have been in the grant to start with and demonstrate incompetence at the FCC. Hopefully, these grant awards will be canceled.

The RDOF grant also allowed grants to be awarded for technologies that should never have been allowed in the grants. In another giveaway to big telcos there were awards made to enhance DSL – does this FCC really think there is any life left in rural copper? Technology as slow as 25 Mbps was allowed in the auction – all due to the FCC not having the backbone to define broadband at a more reasonable and higher speed. Perhaps the biggest dollar problem was letting bidders claim faster technology speeds than are conceivably possible – such as bidders that claimed the ability to build gigabit fixed wireless. All the WISPs I know are irate about this since they were outbid in the auction by bidders that lied about technical capabilities. Such grant applications must be nixed in the long-form process or the FCC will have allowed fraud into the grant process. The headscratcher that has generated a hundred articles is why the FCC is allowing grants for satellite broadband – a technology that is going to reach all of these places anyway, without the grant money.

Maybe the biggest problem with the RDOF process is that the penalties for cheating are too small. This is the flaw that doomed CAF II. The big telcos did the math on CAF II and figured they could do absolutely nothing and still keep a significant percentage of the grant dollars. The penalty for taking RDOF money and doing nothing ought to be repayment of more than 100% of the grant – a repayment that cannot be hidden behind a bankruptcy.

It’s hard to know which of these problems was the worst, and they all contributed to the disaster we see at the end of the grant. This FCC has loudly complained about fraud and waste in the universal service fund. But it turns out the biggest waster of the funds is the FCC itself. It tossed away $11 billion in the CAF II awards and is on the path to toss away more with the RDOF awards.

The second part of this blog, to be published tomorrow, will suggest a better way to handle large FCC grants. We need to find a better way because I can’t see the FCC fixing all of the problems listed above.

How Secure is Our Telecom Infrastructure?

The recent bombing in Nashville is a reminder that our telecom infrastructure is always at risk from terrorism or major natural disasters. The Nashville bombing is a telecom company’s worst nightmare – a deranged bomber parked a powerful bomb outside the building with the express intent of wiping out the AT&T communications hub. We’ll have to wait to hear the full details of the damage done, but it seems that AT&T was able to restore most local service within three or four days.

We’ve had other major outages. The biggest came on September 11, 2001, when terrorists knocked down the World Trade Center towers. This had the secondary impact of damaging the major Internet hub located across the street from the towers – a major Verizon tandem office that was also a CLEC hotel and a switching point for the Internet. The collapsing towers not only damaged some of the electronics at the site, but the prolonged power outages eventually resulted in overheated equipment and cascading failures.

The third big disaster I recall was the Howard Street Tunnel fire in Baltimore in 2001. A rail crash inside the tunnel resulted in an intense fire that melted the fiber optic cables that delivered Internet traffic between Washington DC and the northeast corridor.

In addition to these major news-event outages, I’ve seen numerous smaller outages caused by hurricanes, floods, and tornadoes where telecom buildings and huts were largely obliterated. The most unusual outage I recall was when brazen thieves stole several miles of large copper wiring off the poles near Sugar Land, Texas.

Anybody who works in a telecom network understands how fragile the network is, at least locally. We do our best to hide electronics inside buildings or behind fences. We’re careful not to create maps showing the locations of key switching and fiber connection sites. But the fact is that a determined person that understands a network can do a huge amount of damage in a single night in most cities. They wouldn’t need a camper full of explosives to cause major damage.

We’ve come a long way since 2001 in planning ahead for major disasters. From what I’ve read about Nashville, AT&T brought in a few dozen temporary cell sites to restore cellular coverage quickly. In 2001, I recall that Verizon was proud of delivering a portable switch inside a trailer – but it took weeks to restore phone service even over the cables that hadn’t been damaged.

We’ve also become adroit at quickly switching traffic around damaged facilities. The 2001 tunnel fire destroyed fibers for which there was no alternate routing. Today, most carriers have multiple routing options and the ability to electronically divert traffic away from outages. We now have companies like Cloudflare and ThousandEyes which can react instantly to network problems and reroute traffic as needed. We had nothing like this in 2001.

But the Nashville bombing reminds us that we can’t forget about security when designing networks. I know of fiber networks where large OLT huts are sitting unprotected and open to the public – the network owner is largely counting on the fact that nobody knows what the hut is for. To save money and speed up construction we’ve changed from using concrete block buildings with secure doors to smaller and more fragile metal cabinets.

The damage in Nashville ought to be a reminder to network owners to review the physical safety of their networks. Little steps, like physical barriers such as fences and hedges, can make a difference. In today’s world, there is no reason not to have security systems with cameras and motion detectors that can notify law enforcement when somebody is visiting a hut in the middle of the night. We don’t need to be paranoid about security – we have hundreds of thousands of telecom sites that are safe and undisturbed day after day. But the Nashville bombing is a reminder that somebody with a grudge or a nutty idea can cause a lot of damage to our networks, which are a lot more fragile than we want to admit.