A Fiber Land Grab?

I was surprised to see AT&T announce a public-private partnership with Vanderburgh County, Indiana, to build fiber to 20,000 rural locations. The public announcement of the partnership says that the County will provide a $9.9 million grant and that AT&T will pick up the remaining $29.7 million of the investment.

The primary reason this surprised me is that it is a major reversal in direction for AT&T. The company spent the last thirty years working its way out of rural America, capped by an announcement in October 2020 that it would no longer connect new DSL customers. AT&T has publicly complained for years about the high cost of serving rural locations and has steadily cut its rural costs by closing business offices and cutting rural technicians. It’s almost shocking to see the company dive back in as a last-mile ISP in a situation that means long truck rolls and higher operating costs.

I’m sure it was the County grant that made AT&T consider this, but even that is surprising since the County is only contributing 25% of the funding. I’ve created hundreds of rural business plans, and most rural builds need grants of 40% or even much more to make financial sense. I assume that there is something unique about this county that makes that math work. AT&T and other telcos have one major advantage for building fiber that might have come into play – they can overlash fiber onto existing copper wires at a fraction of the cost of any other fiber builder, so perhaps AT&T’s real costs won’t be $29.7 million. Obviously, the math works for AT&T, and another county will be getting a rural fiber solution.

AT&T is not alone in chasing rural funding. We saw Charter make a major rural play in last year’s RDOF reverse auction. That auction also attracted Frontier and Windstream, and both companies have made it clear that fiber expansion and grant funding are a key part of their future strategic plans.

My instincts are telling me that we are about to see a fiber land grab. The big ISPs other than Verizon shunned building fiber for decades. When Verizon built its FiOS network, every other big ISP called fiber a strategic mistake. But we’ve finally reached the time when the whole country wants fiber.

This AT&T announcement foreshadows that grant funding might be a major component of a big ISP land grab. The big ISPs have never been shy about taking huge federal funding, and I wouldn’t be surprised if they are collectively planning in their board rooms to grab a majority of any big federal broadband grant program.

I think there is another factor that has awakened the big ISPs, which is also related to a land grab. Consider Charter. If it looks out a decade or two into the future, it can see that rural fiber will surround its current footprint if it does nothing. All big ISPs are under tremendous pressure from Wall Street to keep growing. Charter has thrived for the last decade with a simple business plan of taking DSL customers from the telcos. It doesn’t require an Ouija board to foresee the time in a few years when there won’t be any more DSL customers to capture.

I’m betting that part of Charter’s thinking in getting into the RDOF auction was the need to grab more geographic markets before somebody else does. Federal grant money makes this a lot easier to do, but without geographic expansion, Charter will eventually be landlocked and will stop growing at a rate that satisfies Wall Street.

Charter must also be worried about the growing momentum to build fiber in cities. I think Charter is grabbing rural markets where it can have a guaranteed monopoly for the coming decades to hedge against losing urban customers to competition from fiber and from wireless ISPs like Starry.

My guess is that the AT&T announcement is just the tip of the iceberg. If Congress releases $42 billion in broadband grants, the big companies are all going to have their hands out to get a big piece of the money. And that is going to transform the rural landscape in a way that I would never have imagined. I would have taken a bet from anybody, even a few years ago, that AT&T would never build rural fiber – and it looks like I was wrong.

Zayo Installs 800 Gbps Fiber

Zayo announced the installation of an 800 Gbps fiber route between New York and New Jersey. This is a big deal for a number of reasons. In my blog, I regularly talk about how home and business bandwidth has continued to grow and is doubling roughly every three years. It’s easy to forget that the traffic on the Internet backbone is experiencing the same growth. The routes between major cities like Washington DC and New York City are carrying 8-10 times more traffic than a decade ago.
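
As a quick sanity check on that growth claim – my own back-of-the-envelope arithmetic, not carrier data – traffic that doubles every three years grows by a factor of roughly ten over a decade:

# Back-of-the-envelope check: demand that doubles every 3 years grows by
# 2^(years/3) over any span of time.
def growth_factor(years: float, doubling_period: float = 3.0) -> float:
    return 2 ** (years / doubling_period)

print(round(growth_factor(10), 1))  # ~10.1x over a decade, in line with the 8-10x figure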

Ten years ago, we were already facing a backhaul crisis on some of the busiest fiber routes in the country. The fact that some routes continue to function is a testament to smart network engineers and technology upgrades like the one announced by Zayo.

There is not a lot of new fiber construction along major routes in places like the northeast since such construction is expensive. Over the last few years, a major new fiber route was installed along the Pennsylvania Turnpike as that road was rebuilt – but such major fiber construction efforts are somewhat rare. That means that we must somehow handle the growth of intercity traffic with existing fiber routes that are already fully subscribed.

You might think that we could increase capacity along major fiber routes by upgrading the electronics, as Zayo is doing on this one route. But that is not a realistic option in most cases. Backhaul fiber routes can best be described as a hodge-podge. Let’s suppose as an example that Verizon owns a fiber route between New York City and Washington DC. The company would use some of the fibers on that route for its own cellular and FiOS traffic. But over the years, Verizon will have leased lit or dark fibers to other carriers. It wouldn’t be surprising on a major intercity route to find dozens of such leased arrangements. Each one of those long-term arrangements comes with different contractual requirements. Lit fiber leases might be locked in at specific bandwidths, and Verizon has no way of knowing what the carriers leasing dark fiber are carrying.

Trying to somehow upgrade a major fiber route is a huge puzzle, largely confounded by the existing contractual arrangements. Many of the customers using lit fiber will have a five 9’s guarantee of uptime (99.999%), so it’s incredibly challenging to take such a customer out of service, even for a short time, as part of migrating to a different fiber or a different set of electronics.

Some of the carriers on the major transport routes sell transport to smaller entities. This would be carriers like Zayo, Level 3, and XO (which is owned by Verizon). These wholesale carriers are where smaller carriers go to find transport on these existing busy routes. That’s why it’s a big deal when Zayo and similar carriers increase capacity.

I wrote about the first 400 Gbps fiber path in March 2020, implemented by AT&T between Dallas and Atlanta. Numerous carriers have started the upgrade to 400 Gbps transport, including Zayo, which has plans to have that capacity on 21 major routes by the end of 2022. The 800 Gbps route is unique in that Zayo is able to combine two 400-Gbps fiber signals into one fiber path using electronics from Ciena. Verizon had a trial of 800 Gbps last year using equipment from Infinera.

In most cases, the upgrades to 400 Gbps or 800 Gbps will replace routes lit at the older standard 100 Gbps transport. While that sounds like a big increase in capacity, in a world where network capacity is doubling every three years, these upgrades are not a whole lot more than band-aids.
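
To put the band-aid comment in perspective, here is some rough arithmetic of my own, assuming demand keeps doubling every three years: an upgrade that multiplies route capacity by eight only buys about nine years of growth.

import math

# Years of headroom bought by a capacity upgrade, assuming demand doubles every 3 years.
def years_of_headroom(capacity_multiple: float, doubling_period: float = 3.0) -> float:
    return doubling_period * math.log2(capacity_multiple)

print(round(years_of_headroom(400 / 100), 1))  # 100 Gbps -> 400 Gbps: ~6 years
print(round(years_of_headroom(800 / 100), 1))  # 100 Gbps -> 800 Gbps: ~9 years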

At some point, we’re going to need a major upgrade to intercity transport routes. Interestingly, all of the federal grant funding floating around is aimed at rural last-mile fiber – an obviously important need – and many federal funding sources can’t be used to build or upgrade middle-mile routes at all. But at some point, somebody is going to have to make the needed investments. It does no good to upgrade last-mile capacity if the routes between towns and the Internet can’t handle the broadband demand. This is probably not a role for the federal government because the big carriers make a lot of money on long-haul transport. At some point, the biggest carriers need to get into a room and agree to open up their purses – for the benefit of them all.

Forecasting Interest Rates and Inflation

This is a topic that I haven’t written about since I started my blog seven years ago because there hasn’t been a reason. We have just gone through a decade that benefitted from both low interest rates and low inflation – a rarity in historical economic terms.

Anybody building a broadband network can tell you they are seeing significant inflation in the prices of components needed to build a fiber network. There are some who shrug off current inflation as a temporary result of supply chain issues. To a large degree, they are right, but the inflation is real nonetheless. As someone who worked in the industry in past times of inflation, my experience is that prices never go back down to former levels. Even if all of the factors leading to current inflation are eventually solved, it’s unlikely that the companies that make conduits and handholes will ever go completely back to the old prices.

To some degree, the lack of inflation has spoiled us. As recently as a year ago, I knew that I could pull a business plan off the shelf from ten years earlier, and it probably still made sense. The industry fundamentals from a decade ago were roughly the same, and a business plan that worked then would still have worked.

I hate to say it, but those days of surety might be over for a while. The chart below is all too familiar to those of us who have been in the industry a long time. In the not-too-distant past, we saw periods of both high interest rates and high inflation. 1980 is not ancient history, and those of us who were in the industry at the time recall the jarring effect both had on telephone companies. This chart doesn’t go back to even worse times, like 1971, when President Nixon ordered a nationwide freeze on wages and prices to try to stop runaway inflation. I remember seeing a talking head economist on a business show a few years ago who said that we now know how to beat inflation and that high inflation and high interest rates were never coming back to the U.S. economy. I had a good laugh because I knew this guy was a total idiot.

We now live in a global economy, and the U.S. doesn’t have any magic pill that somehow keeps us out of worldwide economic upheaval. As one example, West Africa is currently suffering from high inflation – the current inflation rate in Nigeria is 16%, down from over 20%. The Democratic Republic of Congo is one of the primary sources of metals like cobalt and tantalum that are essential for making things like computer chips and cellphones. When the price of raw materials from Congo skyrockets, the industries that use those resources have no choice but to raise prices to compensate.

We don’t have to go back to ancient history to remember when we worried about interest rates. I worked with cities that were floating municipal bonds in the 2000s, and I recall times when they delayed selling bonds hoping that rates would be more favorable in the weeks or months to follow. One fiber project I was working with was never launched because the interest cost on the bonds grew larger than the project could support.

Everybody who builds financial forecasts for broadband businesses is in a quandary. How do we reflect the rising costs for materials and labor? How can anybody forecast the cost to build fiber two, three, or five years from now? We look out over the next ten years and see an industry that wants to grow faster than the support structure for the industry is ready to handle. Companies like Corning have difficult decisions to make. The company could likely sell twice as much fiber as in recent years if it had more factories. But does it dare build those factories? A factory is a fifty-year investment, and does the company want to have huge idle capacity a decade from now when the fiber craze naturally slows down? Every manufacturer in the industry is having a similar conversation, but nobody knows the calculus for figuring out the right answer. And that calculus will get much harder if we see the return of both inflation and higher interest rates.
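
For what it’s worth, here is a minimal sketch of how I stress-test a construction budget these days. The per-mile cost and the inflation scenarios are illustrative assumptions, not forecasts:

# Stress-testing a fiber construction budget under different inflation assumptions.
# The base cost per mile and the inflation rates below are made-up illustrations.
def escalated_cost(base_cost_per_mile: float, annual_inflation: float, years_out: int) -> float:
    return base_cost_per_mile * (1 + annual_inflation) ** years_out

base = 40_000  # hypothetical rural fiber construction cost per mile today
for rate in (0.02, 0.05, 0.08):
    print(f"{rate:.0%} inflation -> ${escalated_cost(base, rate, 3):,.0f} per mile in year 3")

Even a simple table like this changes the conversation with lenders, because the debt has to be sized for the year the fiber actually goes into the ground, not for today’s prices.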

Interest rates are going to have to increase at some point. Rates have been held below natural market levels as a monetary strategy to fuel the economy. But the Federal Reserve signaled a few weeks ago that it foresees six to seven interest rate increases over the next two years.

I don’t mean for this blog to be gloom and doom. For most of my career, I’ve dealt with both inflation and interest rates when making financial forecasts. The last decade spoiled me like it spoiled many of us, and we need to readjust the way we think about the future and figure out how to deal with an economic world that is returning to normal.

Is Defining Broadband by Speed a Good Policy?

I’ve lately been looking at the policies that have shaped broadband, and I don’t think there has been any more disastrous FCC policy than the one that defines broadband by speed. This one policy has led to a misallocation of funding and has slowed getting broadband to the communities that need it.

The FCC established the definition of broadband as 25/3 Mbps in 2015; before then, the definition was 4/1 Mbps, set in 2010. The FCC defines broadband to meet a legal requirement established by Congress in Section 706 of the Telecommunications Act. The FCC must annually evaluate broadband availability in the country – and the agency must act if adequate broadband is not being deployed in a timely manner. The FCC chose broadband speed as the way to measure its success, and that decision has become embedded in policies both inside the FCC and elsewhere.

There are so many reasons why setting an arbitrary speed as the definition of broadband is a poor policy. One major reason is that if a regulatory agency is going to use a measurement index to define a key industry parameter, that numerical value should regularly be examined on a neutral basis and updated as needed. It’s ludicrous not to have updated the speed definition since 2015.

Cisco has reported for years that the demand for faster speeds has been growing at about 21% per year. Let’s assume that the 25/3 definition of broadband was adequate in 2015 – I remember thinking at the time that it was a fair definition. How could the FCC not have updated such a key metric since then? If you accept 25 Mbps download as an adequate definition of broadband in 2015, then applying the expected 21% annual growth in demand produces the following results.

Download Speeds in Megabits / Second

Year:   2015   2016   2017   2018   2019   2020   2021
Mbps:     25     30     37     44     54     65     79

This is obviously a simplified way to look at broadband speeds, but a minimum broadband definition of 79 Mbps feels a lot more realistic today than 25 Mbps. Before arguing about whether that is a good number, consider the impact of extending this chart a few more years. This would put the definition of broadband at 96 Mbps in 2022 and 116 Mbps in 2023. Those higher speeds not only feel adequate – they feel right. 80% of the homes in the country already have access to cable company broadband where a speed of at least 100 Mbps is available. Shouldn’t the definition of broadband reflect the reality of the marketplace?
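
For anyone who wants to check the arithmetic, here is the same compounding in a few lines of code (rounding differences of a megabit or so aside, it tracks the chart above and the 2022 and 2023 figures):

# Compound 21% annual growth on the 25 Mbps baseline set in 2015.
base_year, base_speed, growth = 2015, 25, 0.21
for year in range(2015, 2024):
    speed = base_speed * (1 + growth) ** (year - base_year)
    print(year, round(speed))  # e.g., 2021 -> 78, 2022 -> 95, 2023 -> 115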

We know why the FCC stuck with the old definition – no FCC wanted to redefine broadband in a way that would suddenly classify millions of homes as not having broadband. But in a country where 80% of households can buy 100 Mbps or faster, it’s hard to see how 100 Mbps isn’t the bare minimum definition of broadband.

There have been negative consequences of this definition-based policy. One of the big problems is that the 25/3 Mbps speed is slow enough that DSL and fixed wireless providers can claim to be delivering broadband even when they are delivering something less. Most of the FCC’s mapping woes come from sticking with the definition of 25/3 Mbps. If the definition of broadband today were 100 Mbps, then DSL providers would not be able to stretch the truth, and we would not have misallocated grant funding in recent years. Stubbornly sticking with the 25/3 definition is how we ended up giving federal broadband grants to companies like Viasat.

As long as we define broadband using speed, we’ll continue to have political fights over the definition of broadband. Congress recently ran headlong into this same issue. The original draft of the Senate bill had proposed a definition of broadband as 100/100 Mbps. An upload speed set at that level would have prohibited broadband grants for cable companies, WISPs, and Starlink. Sure enough, by the time the lobbyists made their calls, the definition of upload speed was lowered to 20 Mbps in the final legislation. Congress clearly gave in to political pressure – but that’s the line of business they are in. We’ve also had an FCC unwilling to be honest about broadband speeds for political reasons – and that is totally unacceptable.

Fixing the Supply Chain

Almost everybody in the broadband industry is now aware that the industry is suffering supply chain issues. ISPs are having problems obtaining many of the components needed to build a fiber network in a timely manner, which is causing havoc with fiber construction projects. I’ve been doing a lot of investigation into supply chain issues, and it turns out the supply chain is a lot more complex than I ever suspected, which means it’s not going to be easy to get the supply chain back to normal.

One of the supply chain issues causing problems throughout the economy is the semiconductor chip shortage. Looking at just this one issue demonstrates the complexity of the supply chain, and a similar story can be told about other supply chain issues like fiber and conduit. Consider all of the following issues that have accumulated to negatively impact the chip supply chain:

  • Intel Stumbled. Leading into the pandemic, Intel stumbled in its transition from 10-nanometer chips to 7-nanometer chips. This created delays in manufacturing that led many customers to look to other manufacturers like AMD. Changing chip manufacturers is not a simple process since a chip manufacturer must create a template for any custom chip – a process that normally takes 4 – 6 months. Chip customers found themselves caught in the middle of this transition as the pandemic hit.
  • Demand for Specific Chips Changed. Chipmakers tend to specialize in specific types of chips, and they shift gears in anticipation of market demand. Before the pandemic, the makers of DRAM and NAND memory chips had curbed production due to declining sales of smartphones and PCs. When the pandemic caused a spike in demand for those devices, the chipmakers had already shifted to producing other kinds of chips.
  • Labor Issues. Chipmakers were like every other industry with shutdowns due to COVID outbreaks. And like everybody else, the chipmakers had labor shortages due to workers who were unable or unwilling to work during the pandemic.
  • Local Issues. Every industry suffers from temporary local issues, but these issues were far more disruptive than normal during the pandemic. For example, an extended power outage crippled Taiwan’s TSMC. A fire knocked out a factory of auto chipmaker Renesas.
  • A Spike in Demand. One of the consequences of the pandemic has been a huge shift to cloud services, which caused an unexpected spike in the chips needed for data centers. Rental car companies maintained revenue during the pandemic by selling off their fleets – and the crunch to replace those cars is creating more temporary chip demand than the industry can supply.
  • Trade War. The ongoing trade issues between the U.S. and China have caused slowdowns in Chinese manufacturing. One estimate I saw said that as many as 40% of Chinese factories were shut during the peak of the pandemic.
  • Shipping Logjam. Getting shipped items through ports is taking as long as six weeks, due mostly to labor shortages of port workers, ship crews, and truckers. This doesn’t affect just the finished chips being shipped but also the raw materials used to make or assemble chips.
  • Raw Material Shortages. The world has tended to lean on single markets for raw materials like lithium, cobalt, nickel, manganese, and rare earth metals. The Brookings Institution says that the pandemic has caused delays and shortages of thirteen critical metals and minerals.
  • Selective Fulfillment. Overseas chip industry giants like the Netherlands’ ASML, Taiwan’s TSMC, and Korea’s Samsung chose to satisfy domestic and regional chip demand before global demand in places like the U.S.
  • Receive-as-Needed Logistics. Over the last decade, many manufacturers have moved to just-in-time processes that have materials and parts appearing at the factory only as they are needed. I recall manufacturers that bragged about having components delivered only an hour before use on the factory floor. Anybody using this logistics method was stopped dead during the pandemic, and many companies are reexamining their logistics strategies.

I suspect this list just touches the tip of the iceberg and that there are probably a dozen more reasons why chips are in short supply. Unfortunately, every major industry has a similar list. It’s not going to be easy for the world to work its way out of all of this because the problems in any one industry tend to impact many others. I’ve read opinions from optimists who believe we’ll figure all of this out in 2022, but others say some of these issues are going to nag us for years to come.

Do We Still Need the Universal Service Fund?

There is currently a policy debate circulating about who should pay to fund the FCC’s Universal Service Fund. For decades, the USF has collected fees from telephone carriers providing landline and cellular service – and these fees have been passed on to consumers. As landline telephone usage has continued to fall, the fees charged to the remaining customers have increased. There have been calls for years to fix the USF funding mechanism by spreading the fees more widely.

Since the fund today is mostly used to support broadband, the most logical way to expand funding is to collect the fee from ISPs – which would also likely pass the fees on to consumers. A new idea has surfaced suggesting that the USF should instead be funded by the biggest users of the Internet – Netflix, Google, Facebook, and the like. This argument was likely started by the big ISPs, which want to deflect the fee obligations elsewhere. The argument is that the big web companies get tremendous benefits from the Internet without paying towards the basic infrastructure.

As I’ve read this back-and-forth debate, I’ve been struck by a different thought. Instead of expanding funding for the USF, we ought to be talking about curtailing it. The Universal Service Fund is used for several purposes. USF funds the subsidies that get cheaper broadband to schools and libraries. The fund also pays for better broadband for rural health care facilities. These seem like worthwhile programs that should continue to be funded.

But the USF has also been supporting the Lifeline program that gives a $9.25 monthly discount to qualifying low-income homes. The amount of that monthly subsidy hasn’t changed in years and has become more irrelevant over time. Some of the big ISPs have dropped out of the program entirely, such as AT&T, which has ditched participation in most of the states where it is still the incumbent telco. There were always rumors that the fund included a lot of fraud – but we never saw enough detail to know if that was true.

It seems like the current White House and Congress have a better alternative to Lifeline. Congress created the Emergency Broadband Benefit, which gives low-income homes a $50 monthly discount on broadband during the pandemic, and has suggested replacing it with a more permanent $30 discount. If Congress gets its act together and passes the infrastructure bill, then it’s time to have a serious talk about eliminating the FCC’s Lifeline program. There is no need to have both programs.

The final use of the Universal Service Fund is what I often refer to as the FCC’s slush fund. The FCC lets this fund accumulate and supposedly uses it to improve broadband in the country. But frankly, the FCC is terrible at this. Consider the history of this piece of the USF:

  • This money was originally intended to support rural telephone companies. State regulators capped telephone rates in most states in the range of $15 – $20 per month, and that was not enough revenue to support the telephone networks in high-cost areas. Congress and the FCC decided many years ago that the U.S. economy was best served if everybody was connected to the telephone network, and this might have been the biggest boon to rural America after electrification. The policy was effective, and at one point we had a 99% telephone penetration rate in the country. This fund was needed, but it had big flaws. The FCC handed out the money based on formulas instead of looking at the needs of individual telcos, and that resulted in some telcos and commercial telco owners getting incredibly rich from an over-generous subsidy. There was never any serious attempt at the FCC to get this right.
  • But as landline telephone service has been supplanted by cellular service and VoIP, the FCC transitioned this fund to subsidize rural broadband. Perhaps the best use of the funding was the ACAM program that gave money to rural telcos, many of which leveraged the money and took on big loans to build rural fiber. When people marvel at the amount of rural fiber in the Dakotas – it was funded by the ACAM program. But this plan also had faults, since some telcos used the ACAM money to upgrade DSL and pocketed much of the subsidy.
  • After this, the FCC used the slush fund for a series of disastrous funding plans. The first was CAF II, where the FCC gave $11 billion to the largest telcos to upgrade rural DSL to 10/1 Mbps. This funding was given at a time when 10/1 Mbps was already too slow. A few telcos used the money properly but made little dent in improving broadband, since tweaking out-of-date rural DSL didn’t make broadband much better. I’m not alone in thinking that some of the big telcos pocketed much of this money – they made a few cosmetic upgrades but largely took the money straight to the bottom line. The FCC was so aghast at the way this funding was wasted that it tacked on an extra $2 billion payment to the telcos after the end of the program.
  • Next, the FCC held a small reverse auction with some money left over from CAF II. Some of this money went to worthwhile fiber projects, but money also went to ISPs like Viasat – a mind-numbing use of federal subsidies.
  • Next came the RDOF reverse auctions. I think we’ll look back a decade from now and judge that this funding did far more harm than good. If you follow my blog, you know I believe that the FCC mucked up this program in half a dozen ways, each of which will have long-term consequences in the neighborhoods where the FCC got it wrong.
  • Finally, the FCC tried to launch a $6 billion 5G fund that would have handed subsidies to cellular carriers to extend cell coverage into areas where it’s needed. But there was so much deception in the reporting of rural cellular speeds that the FCC finally pulled the plug on this – although I think this idea is likely to roar back to life one of these days.

The bottom line is that the FCC is incredibly inept at administering the slush fund. I don’t know why anybody would think that a regulatory agency made up mostly of industry lawyers is the best place to entrust billions of dollars of broadband funding. It’s hard to imagine that the FCC could have done any worse with this slush fund over the last decade. I’m pretty sure that any six readers of this blog could have chatted over beers and come up with better ways to use the money.

So rather than debate whether AT&T or Facebook should fund the Universal Service Fund – why don’t we debate largely eliminating the fund? I can’t think of any reason why we should continue to let the FCC gum up rural subsidy programs. Let’s find a way to fund the schools, libraries, and rural health care programs, and let’s get the FCC out of the business of goofing up subsidies.

An Update on Robocalling

The FCC has taken a number of actions against robocalling over the last year to try to tamp down on the practice, which every one of us hates. I’ve had the same cellular phone number for twenty-five years, and I attract far more junk calls every day than legitimate business calls.

The specific actions taken so far include:

  • The FCC issued cease-and-desist letters to some of the biggest robocallers. For example, in May of this year, the agency ordered VaultTel to stop placing robocalls.
  • The FCC has been fining telemarketers with some of the biggest fines ever issued by the agency. This includes a $225 million fine against a Texas-based health insurance telemarketer for making over one billion spoofed calls. There have been other fines such as $120 million against a Florida time-share company and $82 million against a North Carolina health insurance company.
  • The FCC is hoping that its program for caller ID verification will tamp down significantly on robocalls. This process, referred to as STIR/SHAKEN, requires that underlying carriers verify that a call is originating from an authorized customer (a rough sketch of the signing idea appears after this list). The new protocol has already been implemented by the big carriers like AT&T, but smaller carriers were given more time. The FCC noted recently that it has seen a big shift of robocalling originating from smaller carriers that are not yet part of STIR/SHAKEN.
  • The agency has begun to coordinate efforts with law enforcement to track down and arrest robocallers who continue to flout the rules. That includes working with the U.S. Justice Department and state attorneys general.
  • The FCC also gave telephone companies permission to ‘aggressively block’ suspected robocalls. The agency has also encouraged telephone companies to offer advanced blocking tools to customers.

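For readers who want a feel for the mechanics behind STIR/SHAKEN, here is a rough, simplified sketch of the attestation idea: the originating carrier signs a small token (called a PASSporT) vouching for the calling number, and the terminating carrier verifies the signature before trusting the caller ID. The phone numbers, key handling, and certificate URL below are made-up placeholders, and a real deployment rides on SIP signaling and the SHAKEN certificate authority system, which this sketch ignores.

import time
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

# A real carrier signs with a key certified under the SHAKEN PKI; this throwaway
# key exists only for illustration.
private_key = ec.generate_private_key(ec.SECP256R1())

claims = {
    "attest": "A",                        # full attestation: carrier knows this customer and number
    "orig": {"tn": "13035550000"},        # calling number being vouched for (placeholder)
    "dest": {"tn": ["13035551212"]},      # called number (placeholder)
    "iat": int(time.time()),
    "origid": "example-origination-id",   # hypothetical opaque call identifier
}

# The signed token travels in the SIP Identity header; the terminating carrier
# fetches the certificate named in x5u and checks the signature.
token = jwt.encode(
    claims,
    private_key,
    algorithm="ES256",
    headers={"typ": "passport", "ppt": "shaken",
             "x5u": "https://certs.example.com/shaken.pem"},  # placeholder certificate URL
)
print(token)

The real protocol is defined in RFCs 8224 and 8225 and the ATIS SHAKEN standards. The point is simply that every call carries a cryptographic signature tying it back to an originating carrier that can be held accountable – which is why robocallers have migrated to providers that haven’t yet implemented the protocol.
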
So far, the FCC actions haven’t made a big dent in robocalling. In 2020, we saw about 4 billion robocalls per month. The robocallers picked up the pace of calling in anticipation of getting shut down, and in March of this year, there were over 4.9 billion robocalls placed. In the most recently completed month of August, we still saw 4.1 billion robocalls. It appears that the robocallers have just shifted their methods and are able, at this point, to avoid the STIR/SHAKEN restrictions from the big carriers. Hopefully, a lot of this will get fixed when the protocol is mandatory for everybody. The FCC recently announced that it was accelerating the implementation date for a list of carriers that the agency says are originating a lot of the robocalls.

The FCC knew from the start that this wasn’t going to be easy. The process of generating robocalls is now highly mechanized, and a few companies can generate a huge volume of calls. Apparently, the profits are lucrative enough for robocallers to flirt with the big FCC fines. When I searched Google for the keywords robocaller and FCC, the first result was a company that is still selling robocalling services.

We saw the same thing a few years ago with access stimulation, where a few unscrupulous companies and carriers were making big dollars from generating huge volumes of bogus calls in order to bill access charges.

Hopefully, the FCC can eventually put a big dent in robocalling. It’s getting hard to imagine anybody being willing to answer a phone call from a number they don’t recognize. Hopefully, more giant fines and a few major convictions will convince the robocalling companies that it’s not worth it.

Improvements in Undersea Fiber

We often forget that a lot of what we do on the web relies on broadband traffic that passes through undersea cables. Any web traffic from overseas gets to the U.S. through one of the many underwater fiber routes. As with all fiber technologies, engineers and vendors have regularly been making improvements.

The technology involved in undersea cables is quite different from what is used for terrestrial fiber. A long fiber route includes repeater sites where the light signal is refreshed – without repeaters, the average fiber light signal dies within about sixty miles. Our landline networks rely on powered repeater sites, and for major cross-country fiber routes, multiple carriers often share those sites.

But an undersea cable has to carry the electric power and the repeaters along with the fiber, since the cable may be laid as deep as 8,000 meters beneath the surface. HMN Tech recently announced a big improvement in undersea electronics technology. On a new undersea route between Hainan, China, and Hong Kong, the company has been able to deploy 16 fibers with repeaters. This is a huge improvement over past technologies that limited the number of fibers to eight or twelve. With 16 lit fibers, HMN will be able to pass data on this new route at 300 terabits per second.
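
Some quick arithmetic of my own puts those numbers in perspective, using the round figures above and a hypothetical route length:

# Per-fiber share of the announced 300 Tbps, plus a rough repeater count for a
# hypothetical 1,000-mile undersea route with repeaters roughly every 60 miles.
total_tbps = 300
lit_fibers = 16
print(total_tbps / lit_fibers)                 # ~18.75 Tbps per lit fiber

route_miles = 1_000                            # hypothetical route length
repeater_spacing_miles = 60
print(route_miles // repeater_spacing_miles)   # ~16 repeaters to power and maintain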

Undersea fibers have a rough existence. Somewhere in the world, there is a cut on an undersea fiber every three days, and there is a fleet of ships that travels the world fixing undersea fiber cuts and bends. Most undersea fiber problems come from the fiber rubbing against rocks on the seabed. But fibers are sometimes cut by ship anchors, and even occasionally by sharks that seem to like to chew on the fiber – sounds just like squirrels.

Undersea fibers aren’t large. Near shore, the cables are about the width of a soda can, with most of that girth made up of tough shielding to protect against the hazards of shallow water. To the extent possible, an undersea fiber is buried near the shore. Farther out to sea, the cables are much smaller, about the size of a pencil – there is no need for heavy protection on fibers lying deep on the ocean floor.

With the explosion in worldwide data usage, it’s vital that the cables can carry as much data as possible. The builders of the undersea routes only count on a given fiber lasting about ten years. The fiber will last longer, but the embedded electronics are usually too slow after a decade to justify continued use of the cable. Upgrading to faster technologies could mean a longer life for the undersea routes, which would be a huge economic benefit.

Technology Neutrality

Christopher Ali, a professor at the University of Virginia, says in his upcoming book Farm Fresh Broadband that technology neutrality is one of the biggest policy failures of our time. I completely agree, and today’s blog explores the concept and the consequences.

Over the last decade, every time a pot of grant money has appeared on the horizon, we’ve heard talk at the FCC about making sure that there is technology neutrality when choosing the winners and losers of federal grants. This phrase had to have been invented by one of the big ISPs because, as is typical of DC politics, technology neutrality means exactly the opposite of what you might think it means.

Technology neutrality is a code word for allowing slower technologies to be funded from grants. The first time I remember hearing the phrase was in 2018, during the lead-up to the CAF II reverse auction. This was a $2 billion reverse auction for locations that hadn’t been claimed in the original FCC CAF II program. Many in the industry thought that federal grant funds ought to only be used to support forward-looking technologies. The term technology neutrality was used to support the argument that all ISPs and technologies should be eligible for grant funding. It was argued (mostly by ISPs that use slower technologies) that the FCC should not be in the game of picking winners and losers.

The technology neutrality proponents won the argument, and the FCC allowed technologies with capabilities as slow as 25/3 Mbps into the reverse auction. The results were what might be expected. Since lower-speed technologies tend to also be the least expensive to build, the slower technologies were able to win in a reverse auction format. It was not surprising at the end of that auction to see that three of the four top winners will collect $580 million to deploy slower technologies. This included fixed wireless providers AMG Technology (Nextlink) and WISPER, as well as high-orbit satellite provider Viasat.

The same argument arose again as the rules were being developed for the RDOF reverse auction. The first auction offered $14 billion in subsidies for ISPs to build last-mile broadband in places that the FCC thought had no broadband with speeds of at least 25/3 Mbps. The FCC heard testimony from the industry about the technologies that should be eligible for the subsidies. In the end, in the name of technology neutrality, the FCC allowed every technology into the reverse auction. The following is a quote from the FCC order that authorized the RDOF funding:

Although we have a preference for higher speeds, we recognize that some sparsely populated areas of the country are extremely costly to serve and providers offering only 25/3 Mbps may be the only viable alternative in the near term. Accordingly, we decline to raise the required speeds in the Minimum tier and we are not persuaded that bidders proposing 25/3 Mbps should be required to build out more quickly or have their support term reduced by half.

Again, it was not surprising to see that the list of RDOF winners included companies that will use the funding to build slower technologies, including fixed wireless and DSL. Only two of the top winners promised to build gigabit-capable broadband everywhere (a consortium of electric cooperatives and Charter). The FCC also decided at the last minute to allow Starlink into the auction – even though nobody knew at the time the speeds that could be delivered. The FCC really goofed up the technology issue by allowing some WISPs to bid and grab major winnings in the auction by promising to deliver gigabit speeds with fixed wireless technology – a technology that doesn’t exist for a rural setting.

We recently saw the technology neutrality issue rear its head again in a big way. As the Senate was crafting legislation for a major infrastructure program, the original draft language included a requirement that any technologies built with the money should be able to immediately deliver speeds of 100/100 Mbps. That requirement would have locked out fixed wireless and cable companies from the funding – and likely also satellite companies. In backroom wrangling (meaning pressure from the big ISPs), the final legislation lowered that threshold to 100/20 Mbps.

The reason that Ali says that this is a policy failure is that the broadband policymakers are refusing to acknowledge the well-known fact that the need for broadband speeds continues to increase year after year. We just went through a miserable pandemic year where millions of homes struggled with inadequate upload broadband speeds, and yet the technology neutrality canard was rolled out yet again to justify building technologies that will be inadequate almost as soon as they are built. I would argue that the FCC has an obligation to choose technology winners and losers and shouldn’t waste federal broadband money on technologies that have no long-term legs. The decision by regulators and legislators to allow grant funding for slower technology means that the speed that current ISPs can deliver is being given priority over the speed people need.

The Pandemic and the Internet

Pew Research Center conducted several polls asking people about the importance of the Internet during the pandemic. The Pew survey report is seven pages filled with interesting statistics and is a recommended read. This blog covers a few of the highlights.

The Overall Impact of the Internet. 58% of adults said that the Internet was essential during the pandemic – that’s up from 52% in April of 2020. Another 33% of adults say the Internet was important but not essential. Only 9% of adults said the Internet wasn’t important to them. The importance of the Internet varied by race, age, level of education, income, and location.

  • As might be expected, 71% of those under 30 found the Internet to be essential compared to 38% of those over 65.
  • 71% of those with a college degree found the Internet to be essential versus 45% of those with a high school degree or less.
  • 66% of those in the upper third of incomes found the Internet to be essential compared to 55% of those in the lower third.
  • 61% of both urban and suburban residents found the Internet to be essential compared to 48% for rural residents.

Video Calling Usage Exploded. Possibly the biggest overall change in Internet usage has been the widespread adoption of video calling. 49% of adults made a video call at least once per week, with 12% doing so several times per day. The usage was most pronounced for those who work from home, with 79% making a video call at least once per week and 35% connecting multiple times per day.

Longing for a Return to Personal Interactions. Only 17% of Americans say that digital interactions have been as good as in-person contacts, while 68% say digital interactions are useful but no replacement for in-person contacts.

Challenges with Online Schooling. Only 18% of households said that online schooling went very well, with 45% saying it went somewhat well. 28% of households reported it was very easy to use the technology associated with online schooling, with another 42% saying it was somewhat easy. Twice as many people in the lower one-third of incomes as in the upper one-third said the online schooling technology was difficult. Nearly twice as many people in rural areas as in suburban areas found the online schooling technology to be a challenge.

Problems with Internet Connections. 49% of all survey respondents said they had problems with their Internet connection during the pandemic, and 12% experienced problems often.

Upgrading Internet. 29% of survey respondents said they did something to improve their Internet connection during the pandemic.

Affordability. 26% of respondents said they are worried about their ability to pay their home broadband bills. This was 46% among those in the lower one-third of incomes.

Tech Readiness. 30% of Americans say they are not confident using computers, smartphones, or other connected electronics. This was highest for those over 75 (68%), those with a high school degree or less (42%), and those in the lower one-third of incomes (38%).