My 2022 Predictions

It’s that time of the year for me to get out the crystal ball and peer into 2022.

The FCC Will Tackle Broadband Regulation and Net Neutrality. I have no idea why it took a year for the administration to tee up a new Chairman and recommend a fifth FCC Commissioner. But once a new Commissioner is seated, the new FCC will tackle reinstating some version of Title II regulation, accompanied by net neutrality regulations. For yet another year, this won’t come from Congress, which is the only permanent solution.

The Rural Wireless vs. Fiber Debate Will Heat Up Again. Vendors are starting to make a lot of noise that layering on 6 GHz spectrum will revolutionize rural fixed wireless broadband. Everything I’m hearing says that the spectrum can bring fast broadband for a few miles from towers, but the outdoor operating characteristics of 6 GHz are sketchy. It won’t stop the wireless industry from declaring yet again that gigabit rural wireless is here.

Many States Will be Overwhelmed by BEAD Grant Process. The $42.5 billion BEAD grants are going to be awarded through the states on the tail of large amounts of State ARPA funding. About half of the states already have a broadband grant program, but most have awarded only a few tens of millions of dollars in grant funding. States without an existing grant program are facing the daunting task of getting ready to process this funding. States collectively must hire hundreds of qualified grant reviewers in a hurry – which won’t be easy when those same people can make a lot more writing grant requests.

Supply Chain will Get Uglier. The wait time for fiber is not going to get any better during the year as demand for fiber is growing at a hockey-stick rate. Much of the rest of the supply chain will ease during the year as the roadblocks from factors like the lack of raw materials and backed-up ports will begin clearing.

The FCC Maps Aren’t Going to Improve Much. The FCC finally awarded a $45 million contract to CostQuest Associates to start the process of fixing the broadband maps. The contract award was immediately challenged by LightBox, which put the contract on hold. In a prediction that breaks my heart, I predict the revised maps are still going to have major problems. The new maps will better pin down broadband coverage areas, but ISPs will still be able to report marketing speeds instead of actual speeds.

The FCC Will Not Kill all of the Bad RDOF Winners. I hope this is my worst prediction and never happens, but I think the FCC will award RDOF funding to a few companies that shouldn’t be funded. If the FCC doesn’t clear out the RDOF long-form backlog in the next three or four months, the still pending RDOF awards will really gum up those wanting to file BEAD grants.

Starlink is Going to be Unspectacular. 2022 is supposed to be the year when Starlink goes live and starts serving millions of rural customers. My prediction is that the company will stay in beta mode for most or even all of 2022 and not make the promised big splash. However, customers who get Starlink working (around the trees and mountains) are going to rave about having broadband that is light years ahead of DSL.

Cities and Counties Will Quietly use ARPA Money for Broadband. Many local governments are still hesitant to use this funding for broadband. But as localities see other bolder local governments move ahead, I predict many will quietly follow suit. Most ARPA projects won’t get much press outside the local community.

Technician Shortage Becomes Noticeable. The technician/engineer shortage is already becoming evident and will get worse as 2022 progresses. Small construction firms and small ISPs are already seeing staff lured away by larger firms. Salaries will have to increase everywhere, and the cost of building fiber will increase accordingly.

Broadband Inflation Will be Higher than General Inflation. A lot of the issues causing general inflation will ease, and inflation will slow. However, material and labor shortages will mean more inflation for the broadband industry than for the rest of the economy.

Buy American. Applying strict Buy American rules to the use of federal grant funding is going to further disrupt an already messy broadband supply chain.

I Will Bring Home a Boxful of Kittens. I’m just seeing if my wife is still reading my blog!

Pew Investigates Pandemic Homework Gap

Now that most students have returned to live classrooms this fall, there is a lot that can be learned from a post-mortem examination of the ability of students to learn from home. Several studies have shown that students without good home broadband fall behind their peers even when school is back to normal, and so the pandemic gave us a good look at the many homes where students didn’t have broadband or computers.

Pew Research Center released the results of a survey last month that looked at the effectiveness and the problems uncovered when we sent kids home to learn.

93% of parents surveyed said that K-12 children received some online learning during the pandemic. That alone is big news because it means that 7% of students didn’t partake in any online learning. This matches what we’ve been hearing. For example, we recently talked to a high school principal in Arkansas who said that online learning went reasonably well but that the high school ‘lost’ 7% of students. The students never logged into online classes, and the households didn’t respond when contacted by the school. We’ve heard the same story in many other counties where some students seemingly dropped off the grid during the pandemic. That’s going to cause problems for years to come.

30% of the parents in homes that tried online learning said that it was somewhat or very difficult to use the technology and the Internet connection needed to take classes from home. I think it’s fair to say that students who struggled with the technology or who didn’t have adequate broadband fared poorly in terms of learning during the pandemic period.

As might be expected, the households that struggled varied by demographics. Low-income homes were twice as prone to struggling with the technology, with 36% of low-income homes reporting the problem. Rural areas (39%) had more problems with technology and the Internet than urban (33%) or suburban (18%) areas. What’s scariest about this survey response is that almost one in five suburban kids – in areas that likely have the best broadband – struggled with technology and the Internet.

About one-third of parents said that children experienced technology issues that were obstacles in completing schoolwork. 27% of parents said students had to try to do homework on cellphones. 16% said students did not have access to computers. 14% said that kids left home to use public WiFi to complete schoolwork and homework. Low-income homes faced the biggest technology obstacles: 46%, compared to 31% of homes with mid-range incomes and 18% of homes with higher incomes.

Black teens were the most heavily disadvantaged during the pandemic. 13% of Black students said they were regularly unable to complete homework due to technology issues, compared to 4% of white teens and 6% of Hispanic teens.

Household incomes affected the ability to complete schoolwork. 24% of teens from households making less than $30,000 annually said that the lack of a dependable computer or internet connection sometimes hindered them from completing schoolwork, compared to 9% of students living in homes making more than $75,000 annually.

Hopefully, the pandemic is now behind us and won’t close so many schools again – although even now, schools are closing temporarily due to Covid outbreaks. But even as we return to a normal school year, we need to pause and recognize that the students who struggled with schooling from home continue to be disadvantaged compared to their peers even when school is back to normal. Hopefully, we won’t stop caring about the homework gap.

LAA and WiFi

University of Chicago students conducted a study on and near the campus, looking at how LAA (Licensed Assisted Access) affects WiFi. Cellular carriers began using LAA technology in 2017. This technology allows a cellular carrier to snag unlicensed spectrum to create bigger data pipes than can be achieved with traditional cellular spectrum alone. When cellular companies combine frequencies using LAA, they can theoretically create a data pipe as large as a gigabit while only using 20 MHz of licensed frequency. The extra bandwidth for this application comes mostly from the unlicensed 5 GHz band and can match the fastest speeds that can be delivered with home routers using 802.11ac.
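As a rough sanity check on that gigabit claim, the aggregation arithmetic can be sketched in a few lines. The spectral efficiency figure here is an illustrative assumption, not a number from any carrier or standard:

```python
# Back-of-envelope throughput for LAA carrier aggregation.
# The 5 bits/second/Hz spectral efficiency is an illustrative
# assumption, not a figure taken from any carrier or spec.

def aggregate_throughput_mbps(licensed_mhz, unlicensed_mhz, bits_per_hz=5.0):
    """Rough peak throughput if every aggregated MHz carries bits_per_hz."""
    return (licensed_mhz + unlicensed_mhz) * bits_per_hz

# A 20 MHz licensed anchor plus 180 MHz of unlicensed 5 GHz
# spectrum reaches a gigabit under these assumptions.
print(aggregate_throughput_mbps(20, 180))  # 1000.0
```

The point of the sketch is that nearly all of the capacity comes from the unlicensed side: only 20 of the 200 aggregated MHz is licensed spectrum.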

There has always been an assumption that the cellular use of LAA technology would interfere to some extent with WiFi networks. But the students found a few examples where using LAA killed as much as 97% of local WiFi network signal strength. They found that when LAA kicked in, the performance on nearby WiFi networks always dropped.

This wasn’t supposed to happen. Back when the FCC approved the use of LAA, the cellular carriers all said that interference would be at a minimum because WiFi is mostly used indoors and LAA is used outdoors. But the study showed there can also be a big data drop for indoor WiFi routers if cellular users are in the vicinity. That means people on the street can interfere with the WiFi strength in a Starbucks (or your home).

The use of WiFi has also changed a lot since 2017, and during the pandemic, we have installed huge numbers of outdoor hotspots for students and the public. This new finding says that LAA usage could be killing outdoor broadband established for students to do homework. Students didn’t just use WiFi hotspots when they couldn’t attend school, but many relied on WiFi broadband in the evenings and weekends to do homework. Millions of people without home broadband also use public WiFi hotspots.

LAA usage kills WiFi usage for several reasons. WiFi is a listen-before-talk technology, meaning that when a WiFi device wants to grab a connection to the router, the device gets in line with other WiFi devices and is not automatically connected immediately. LAA acts like all cellular traffic and immediately grabs bandwidth if it is available. This difference in the way of using spectrum gives LAA priority in grabbing the frequency first.

LAA connections also last longer. You may not realize it, but WiFi devices don’t connect permanently. WiFi routers connect to devices in 4-millisecond bursts. In a home where there aren’t many devices trying to use a router, these bursts may seem continuous, but in a crowded place with a lot of WiFi users, devices have to pause between connections. LAA bursts last 10 milliseconds instead of the 4 milliseconds for WiFi. This means that LAA devices both connect immediately to unlicensed spectrum and also keep the connection longer than a WiFi device. It’s not hard for multiple LAA connections to completely swamp a WiFi network.
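The burst arithmetic above can be illustrated with a toy airtime model. This is only a sketch of the 4 ms vs. 10 ms difference, assuming devices simply take turns; it is not a faithful simulation of 802.11 or LAA channel access:

```python
# Toy airtime model - an illustration of the burst-length
# difference, not a real 802.11/LAA scheduler simulation.

WIFI_BURST_MS = 4   # WiFi holds the channel ~4 ms per burst
LAA_BURST_MS = 10   # LAA holds the channel ~10 ms per burst

def wifi_airtime_share(n_laa):
    """Airtime fraction left to one WiFi device when it must queue
    behind n_laa LAA connections in each round of transmissions."""
    return WIFI_BURST_MS / (WIFI_BURST_MS + n_laa * LAA_BURST_MS)

for n in (1, 3, 10):
    print(f"{n} LAA connections -> WiFi keeps "
          f"{wifi_airtime_share(n):.0%} of airtime")
```

Even this crude model shows the shape of the problem: with ten LAA connections nearby, the WiFi device is left with under 4% of the airtime, which is in the same ballpark as the 97% performance drops the study observed.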

This is a perfect example of how hard it is to set wireless policy. The FCC solicited a lot of input when the idea of sharing unlicensed spectrum with cellular carriers was first raised. At the time, the technology being discussed was LTE-U, a precursor to LAA. The FCC heard from everybody in the industry, with the WiFi industry saying that cellular use could overwhelm WiFi networks and the cellular industry saying that concerns were overblown. The FCC always finds itself refereeing between competing concerns and has to pick winners in such arguments. The decision by the FCC to allow cellular carriers to use free public spectrum highlights another trend – the cellular companies, by and large, get what they want.

It will be interesting to see if the FCC does anything as a result of this study and other evidence that cellular companies have gone a lot further with LAA than promised. I won’t hold my breath. AT&T also announced this week that it is starting to test LAA using the unlicensed portion of the 6 GHz spectrum.

The Convergence Apocalypse

A new term being passed around the industry is the ‘convergence apocalypse’. This refers to the big cable companies and big telcos finally competing, to the mutual detriment of both. We haven’t had widespread competition in the broadband industry since the period from 2000-2005 when DSL and cable modems had comparable speeds. Cable companies and telcos battled it out to gain customers during the period of explosive growth of landline broadband. But even then, it appeared that duopoly cooperation was in place since the telcos and cable companies decided not to seriously compete on price. The cable companies eventually came out on top as cable modem speeds surpassed DSL, and the cable companies have been winning over DSL customers ever since.

But a competitive future might be back on the table. The telcos are building millions of lines of fiber every year. In 2022 the big telcos collectively have announced plans to build fiber past seven million homes and businesses. As fiber overbuilding continues, it seems likely that the telcos are going to begin clawing back customers from the cable companies. The tables have turned, and the public now views fiber as the superior technology. AT&T recently said that within three years of building fiber in a neighborhood, the company achieves a 37% market share, and it hopes to reach 50% within five years.

The stock prices of the big cable companies have thrived for the last decade due to non-stop growth in broadband customers. That growth was fueled by the combination of households deciding that broadband is a necessity and the ability for the cable companies to lure away DSL customers year after year.

If the telcos achieve the kind of penetration rates anticipated by AT&T, then we’re not too far away from seeing cable company broadband customer growth level off and start dropping. But the biggest factor still going for the cable companies is that, even with the aggressive growth of fiber, most cable markets will still have no real competition.

The cable companies are fighting back in an interesting way. MoffettNathanson recently reported that the cable companies now have over 3 million cellular customers. That’s still a small drop in the giant universe of cellular customers, but the cable companies are winning customers through lower prices. I recently watched Monday Night Football, and every other ad was from Charter pushing an unlimited cellular plan for $29.99 per line for customers buying two lines. Perhaps one of the most interesting ways to fight back against AT&T and Verizon is by putting pressure on cellular pricing.

The big cable companies are in an interesting position in the cellular market since Comcast and Charter own networks that cover a majority of major metropolitan areas. Both companies have also invested significantly in WiFi that can be used to affordably backhaul cellular traffic using the last-mile network.

Where some folks see a convergence apocalypse, I just see some real competition on the horizon. If the telcos and cable companies seriously go after each other, then the winner will be the public. Competition will lower the prices and profits of the big companies, but none of these companies will face any serious financial problems from competition. What is more likely to happen is that the cable companies will likely see lower stock prices – so maybe a better phrase to describe real competition between the big ISPs is a Stock Apocalypse.

Giving the BEAD Grants to the States

One of the most interesting discussions running around the industry is asking why Congress gave the immense power of the $42.5 billion BEAD grants to the states. Large grant programs in the past have been controlled at the federal level. Of course, the only people who know for sure are those who crafted the language in the Infrastructure Investment and Jobs Act.

Congress had a number of options for how to distribute this grant funding. They could have given a role to the FCC, NTIA, USDA, or the states. They could easily have divvied up the money and given some to each of the above – with the concept that this is a chance to see what works best. The Act does spread the money around a little bit. For example, the Act gave an extra $2 billion to the USDA and the RUS ReConnect Grants. The FCC will be riding herd over the $14 billion that has been allocated to the Affordable Connectivity Program, which provides discounts on broadband for qualifying low-income households. But the big grant money is going to the states, with overall grant rules administered by the NTIA.

I think the awards make it clear that Congress doesn’t trust the FCC to administer a big grant program. It appears that the FCC has sullied its reputation in the way it administered the RDOF awards. Congress has repeatedly heard how unhappy constituents are with that program. Back when the idea of a giant infrastructure bill was first circulated, there was serious discussion about letting the FCC distribute the money in a giant reverse auction – and the first draft of the House bill did just that. Thankfully some sanity prevailed in Congress since that would have been a boondoggle of unprecedented horribleness. The FCC made a lot of blunders with the RDOF awards (as they had blown the CAF II program in earlier years).

It makes sense not to give the money to the FCC. I think the FCC chose the reverse auction because the agency knows it doesn’t have the staff or expertise to review complex and overlapping federal grant requests. But the agency is not supposed to have that kind of staff – the FCC is a regulatory agency that makes and enforces rules. There is nothing in that job description that would entail having a large technical staff capable of administering billions of dollars of grants. I can only hope that somehow this new gigantic funding will dissuade the FCC from holding a second round of RDOF or a 5G reverse auction that is being contemplated at the agency.

It’s clear that some in Congress like the RUS, which is part of the USDA, and there have now been several annual rounds of ReConnect grants. But the RUS also doesn’t have a staff capable of quickly processing tens of billions of dollars in grants. The ReConnect grant program is paperwork-heavy, and the RUS is known for being deliberate in awarding grants and loans. Deliberateness is a great characteristic when dispensing federal dollars, but it would be a challenge for the RUS to award BEAD grants quickly.

Congress could also have given the grant obligation to the NTIA directly, but the agency has even less staff able to review grant requests than the RUS. It’s hard picturing the NTIA staffing up quickly enough to dispense $42 billion in grants. However, Congress did trust the NTIA to set the policy for the new BEAD grants – it could have given that task to any of the three agencies. The NTIA also set the policies for the recent ARPA grants, and this probably means that somebody in Congress appreciated that effort.

Giving the money to the states might be the only practical way to dispense this money with any sanity. I’m hearing that state broadband offices across the country are adding significant staff in anticipation of these grants. That will mean many hundreds of grant reviewers and administrators – far more than any of the federal agencies could have mustered in a short period of time.

But giving the money to the states was an interesting choice because each state will put its own stamp on how to spend the money. I know that the NTIA has been given the task of making sure that the grants meet the intentions detailed by Congress in the Act. But I’ll not be surprised to see states push the boundaries of the grant rules or even defiantly disregard them. States know that this is likely the only chance to solve the rural broadband problem, and I don’t picture states failing to award grant money to places that need it, regardless of how Congress wrote the rules.

The 25/3 Mbps Myth

There is no such thing as a 25/3 Mbps broadband connection, or a 100/20 Mbps broadband connection, or even a symmetrical gigabit broadband connection on fiber. For a long list of reasons, the broadband speeds that make it to customers vary widely by the day, the hour, and the minute. And yet, we’ve developed an entire regulatory system built around the concept that broadband connections can be neatly categorized by speed.

I don’t need to offer any esoteric proof of this because every person reading this blog can easily prove this for themselves. Connect a computer directly into your incoming broadband connection and take a speed test multiple times throughout the day. Speed tests taken even minutes apart can vary by more than 10% and, depending upon the technology, can vary far more than that over a 24-hour period. What’s the speed being delivered to and from your home if the speeds you measure vary by 20%, 30%, or 40% over the course of a day? Assigning a single speed to describe your home’s broadband connection is a total fiction.
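To see how much a single number hides, one could summarize a day of speed tests like this. The test values below are made up for illustration; a real reader would substitute their own measurements:

```python
# A hypothetical day of download speed tests (Mbps) on one
# connection - made-up numbers for illustration only.
import statistics

tests = [92, 104, 71, 88, 110, 63, 97, 85]

mean = statistics.mean(tests)
swing = (max(tests) - min(tests)) / mean  # peak-to-trough as a share of the mean

print(f"mean: {mean:.1f} Mbps")
print(f"swing: {swing:.0%} of the mean")
```

With samples like these, the connection swings by roughly half its average speed over the day, which is exactly why labeling it with one speed is a fiction.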

What do regulators mean when they set a speed definition of 25/3 Mbps? Does that represent the slowest speed that can be achieved, the fastest speed that can be achieved, or the average speed? There is no consensus on that. When a telco says it can deliver a speed of 25/3 Mbps, the company is implying that it is capable of that speed but doesn’t guarantee it. When a homeowner buys a 25/3 Mbps connection, they view it as a speed they should be receiving and feel cheated if they get something slower. I’ll be honest – I have no idea what regulators think 25/3 Mbps means in the real world.

I would hope that every regulator understands that speeds are a convenient fiction that the FCC invented as a way to classify homes as having or not having broadband. A home connection that achieves 25/3 Mbps or something faster is considered to be broadband – anything slower is not considered to be broadband.

The big ISPs have always understood that choosing a speed to describe a broadband product is mostly symbolic in nature. Big ISPs regularly take advantage of the official definition of broadband to suit their goals. If a telco wants to create a regulatory block to keep away competitors it will overstate performance and declare its DSL to be 25/3 Mbps. But if having a slower speed means getting a subsidy, the same telco is likely to declare that speeds are less than 25/3 Mbps. Neither of those choices of naming the speed has anything to do with the actual broadband product being delivered – it’s just different manifestations of the speed fiction. Unfortunately, the pretense that ISPs can or cannot deliver certain target speeds has had real-life consequences. There are numerous communities that have been badly harmed by being denied broadband grant funding in the past because a big telco claimed the fictional broadband speeds of 25/3 Mbps.

It seems likely that the FCC will increase the definition of broadband to something like 100/20 Mbps in the upcoming year – and that is not going to stop speed from being an issue. Cable companies spent a lot of lobbying effort during the last year to make sure that the Congressional broadband grants can only be used in neighborhoods that don’t meet the speed requirement of 100/20 Mbps. The cable companies all swear they already meet that speed definition, and that means that the federal grants can’t be used to overbuild an existing cable company. It also means the cable companies can propose to use grant money to extend their coaxial technology outside current markets.

Our firm recently worked with two county seats that are served by big cable companies, and as part of those studies, we had residents take speed tests. With over 1,000 speed tests from cable customers, there were fewer than a dozen tests showing upload speeds faster than 20 Mbps, and over half showed upload speeds of 10 Mbps or less. While many download speed tests were faster than 100 Mbps, about a fourth of download speed tests were under 50 Mbps. Do the cable companies in these towns provide 100/20 Mbps broadband? The speed tests would suggest that the cable companies aren’t meeting that speed definition for more than a tiny percentage of customers. These two towns (and most places that are served by a cable company) will fail a new FCC definition of broadband that requires 20 Mbps upload.
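A tally like the one described can be sketched in a few lines. The sample results below are hypothetical; an actual study would load the full set of a community’s test results:

```python
# Tallying community speed tests against a 100/20 Mbps definition.
# The sample data is hypothetical, for illustration only.

tests = [  # (download_mbps, upload_mbps)
    (142, 11), (96, 18), (47, 8), (118, 22), (88, 9),
    (131, 15), (39, 6), (105, 12), (77, 19), (150, 24),
]

meets = sum(1 for d, u in tests if d >= 100 and u >= 20)
slow_upload = sum(1 for _, u in tests if u <= 10)

print(f"{meets}/{len(tests)} tests meet 100/20 Mbps")
print(f"{slow_upload}/{len(tests)} have uploads of 10 Mbps or less")
```

Note that the upload threshold does most of the work: many of these hypothetical tests clear 100 Mbps down but still fail the definition on the 20 Mbps upload requirement, which mirrors what we saw in the two county seats.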

This all goes to show that a federal definition of broadband is a symbol only, a fiction. We already know that every cable company is going to swear that it meets the 100/20 Mbps definition of broadband – even when actual speed tests show this to not be true. They are going to say that their technology is capable of delivering upload speeds of 20 Mbps and that it doesn’t matter that they are underperforming. The cable companies worked hard to make sure that grants defined upload speeds as 20 Mbps rather than the 100 Mbps that was in the first draft of the Senate infrastructure bill. Just as the telcos manipulated the 25/3 Mbps definition of broadband to meet their purposes, the cable companies are going to do the same with 100/20 Mbps. It’s going to be shocking if any of the $42.5 billion in grants is used to overbuild a cable company – even if a cable company is badly underperforming.

The big ISPs have become masterful at using the federal speed definition to meet their purposes. The big federal grants include the worst of both worlds. They still have a test of defining unserved locations at 25/3 Mbps, allowing telcos to challenge any grant application. And the grants can be used to fund places with speeds up to 100/20 Mbps, allowing the cable companies to argue that the funds can’t be used to compete against them. It’s obvious that the big ISPs had a huge hand in drafting the federal grant language.

For years I have cringed every time I see a federal document that references 25/3 Mbps – because I know that the way that number is being used is a far cry from the broadband actually being delivered to homes. The definition of speed ought to be based on technology and not on the fiction of achieving an imaginary speed goal. If the FCC wants to allow federal grants to overbuild DSL, they should say so. If they want to protect cable companies from competition, they should say so. But a policy that favors or disfavors certain technologies will never fly in a country where lobbyists influence policy. Instead, we’re going to keep seeing definitions of speeds that give the big ISPs the ammunition they need to fight against competition.

The New Web 3.0

There has been a renewed discussion this year about creating what’s being labeled Web 3.0 – the next generation of how we use the web. First, a little history. Web 1.0 ran from 1991 to 2004, when web users were consumers of content and the web was a series of static websites. Web 2.0 emerged in 2004 as user-created content overtook static content. The big winners in this era have been the huge social media platforms that became some of the biggest companies on the planet.

The original idea behind Web 3.0 was similar to the concept of the semantic web, a concept described in 1999 by web pioneer Tim Berners-Lee. The semantic web was to incorporate software that could understand concepts and semantics and could easily navigate between multiple online platforms to create a personalized web experience for each person. Everybody could use the web in their own way and block out the web they don’t want to experience. Think of the semantic web as each person having an intelligent version of Siri that navigates the web uniquely for each user. The semantic web would simplify people’s lives – restaurant reservations would be made automatically, you’d never run out of pet food, you’d be automatically booked for your annual physical. These are things that web platforms have promised but never delivered.

However, the vision of Web 3.0 being discussed today is something new and different. The concept is now to create a decentralized web based on the following principles:

  • Decentralized, meaning that personal data is not automatically stored in a data center under the control of a third party. Data would be stored at the edge to be shared or kept private by user choice.
  • Open, meaning that web platforms would use software that is open to the world, so users would know exactly what is or isn’t being done with their data.
  • Trustless, meaning that two parties don’t need to go through an intermediate trusted party such as a big website to interact and exchange data. Think of the web as becoming a series of peer-to-peer interactions.
  • Permissionless, meaning that users and suppliers can interact without needing authorization from a third party – think being allowed to use apps that are not pre-approved by Apple.
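One building block commonly cited for designs like these is content addressing, where data is referenced by a hash of its bytes rather than by a location on some platform’s server. A minimal sketch, simplified for illustration:

```python
# Content addressing: data is referenced by the hash of its
# bytes rather than by a URL on a platform's server. This is a
# simplified sketch of one decentralized-web building block.
import hashlib

def content_id(data: bytes) -> str:
    """Return a stable identifier derived only from the content."""
    return hashlib.sha256(data).hexdigest()

post = b"a private update shared only with family"
cid = content_id(post)

# Any peer holding these bytes can verify they match the id,
# so no central platform is needed to vouch for the content.
assert content_id(post) == cid
print(cid[:16], "...")
```

Because the identifier depends only on the bytes, any peer can serve or verify the content, which is what lets storage move to the edge without a trusted third party in the middle.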

The concept would be a radical change from the current web. If fully implemented, Web 3.0 would gut the ability of companies like Meta and Google to monetize our personal information unless we choose to give them access to do so. This sort of web would stop many of the practices that make most of us uncomfortable. Apps would no longer be tracking everywhere we drive. Platforms would no longer be automatically mining our data and identifying our friends and family. There would no longer be marketing cookies put onto our devices from every website or app we use.

One key needed to make this work is at least a rudimentary artificial intelligence that automatically and anonymously performs a lot of web functions for people. That’s something that is still a pipedream, with no idea if or when it can ever be fully implemented.

Web 3.0 wouldn’t kill the things that people like about the web today. People would be free to choose to share all of their data and participate in social media platforms the same as today. But a person could also create a private social media group with family with the knowledge that outsiders couldn’t track or monitor what is said within the group. Shopping sites wouldn’t know who you are unless you give them permission or purchase something.

This concept takes us back to what we originally hoped the Web would become. In 2000, nobody imagined the immense power the large web companies have gained through tracking and compiling detailed personal information on each of us. The goal of web 3.0 is to give control of personal data to each person to share or not share as they see fit.

The FCC and Broadband Outages

Comcast had a widespread network outage in early November. The problems started in San Francisco and spread the next day to Chicago, Philadelphia, parts of New Jersey, and three other states. The outage knocked out broadband customers along with Comcast cellular customers. Comcast has never disclosed the reason for the outage and announced only that it was due to a ‘network issue’.

In 2020 CenturyLink suffered an even larger outage that not only knocked out CenturyLink customers but spread into other networks, including Amazon, Cloudflare, and Hulu. The problem was blamed on a software update that blocked the establishment of Border Gateway Protocol (BGP) sessions and impeded broadband traffic routing.

T-Mobile also had a major network outage in 2020 that knocked out broadband customers and also cut off some voice calls and most texting for nearly a whole day. T-Mobile blamed the issue on problems with a leased circuit that was compounded by two previously undetected flaws in third-party software. Reports at the time said that the electronics failed on a leased circuit, and then the backup circuit also failed. This then caused a cascade that brought down a large part of the T-Mobile network.

In 2019 CenturyLink had perhaps the largest outage of all, one that knocked out much of its network along with customers that relied on the Level 3 network for transport. The company blamed the outage on a bad circuit card in Denver that somehow cascaded to bring down a large swath of fiber networks in the West, including numerous 911 centers.

The FCC investigates big outages from time to time and opened an inquiry in October 2020 into a few of the outages listed above. The FCC also recently adopted a Notice of Proposed Rulemaking to investigate the disaster resiliency plans of major telecom providers and take a harder look at how cellular and broadband carriers make repairs after big storms.

Interestingly, the FCC recently fined T-Mobile $19.5 million for the 2020 outage but didn't fine the other carriers. This is not because T-Mobile's outage was worse than the others. T-Mobile was fined because it is a cellular carrier and still fully regulated by the FCC, while Comcast and CenturyLink are ISPs that operate under different regulatory rules.

Oddly, the FCC has very little power to do anything about ISP network outages because the FCC has very little regulatory authority over ISPs in general. The FCC abdicated its authority to regulate ISPs when it killed Title II regulation and handed a few vestiges of regulation to the Federal Trade Commission. The FCC now regulates ISPs only tangentially, through specific authority given directly by Congress. Any authority the FCC once claimed under Title II is gone.

The process has finally started to seat a fifth FCC Commissioner, and the industry speculates that one of the early acts of a full Commission will be to reinstate Title II authority. This effort might be a little more streamlined than in the past because federal courts have already ruled that the FCC can choose to regulate or not regulate broadband.

Unfortunately, any move to regulate ISPs and broadband will only last until we have another shift in administration that wants to kill regulation again. We have ended up in an absurd regulatory merry-go-round where regulating or not regulating ISPs depends on the party that controls the White House. It makes no sense not to regulate ISPs at a time when cable companies have nearly total monopoly power in some markets. Overall, broadband might be the most important industry in the country because it powers just about everything else. Local jurisdictions around the country regulate occupations like nail salon technicians, plumbers, and masseuses, and yet we can't get our act together as a country to regulate an industry where a handful of giant ISPs openly manifest monopoly behavior.

There is a really simple fix for this. Congress could give authority to the FCC to regulate broadband so that future FCCs or administrations could not undo it. It would only take a simple law that says something like, “The FCC shall regulate the broadband industry for the benefit of the citizens of the United States.” Obviously, lawyers could word this to be more ironclad – but giving the FCC the authority to regulate broadband doesn’t have to be complicated.

25-Gigabit PON

The industry has barely broken ground on 10-gigabit PON technology in terms of market deployments, and the vendors in the industry have already moved on to 25-gigabit PON technology. I know a few ISPs that are exclusively deploying 10-gigabit XGS-PON, but most ISPs are still deploying the fifteen-year-old GPON technology.

As a short primer, PON (passive optical network) technology is a last-mile technology that uses one laser in a core location to communicate with multiple customers. In the U.S., most ISPs don't deploy GPON to more than 32 customers per PON. The technology is called passive because there are no electronics in the network between the core laser and the customer lasers. GPON delivers 2.4 Gbps of bandwidth to a PON (a group of customers connected to the same core laser). The upgrade to XGS-PON brings something close to 10 Gbps to a PON, while 25GS-PON will bring 25 Gbps.
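To put those shared-bandwidth numbers in perspective, here's a quick back-of-the-envelope sketch in Python. The 32-way split and the nominal downstream rates come from the primer above; real-world throughput will be somewhat lower after protocol overhead, so treat this as illustrative only.

```python
# Back-of-the-envelope comparison of shared PON bandwidth by generation.
# Nominal downstream rates; actual throughput is lower after overhead.
PON_RATES_GBPS = {
    "GPON": 2.4,
    "XGS-PON": 10.0,
    "25GS-PON": 25.0,
}

def per_customer_gbps(technology: str, split_ratio: int = 32) -> float:
    """Bandwidth per customer if everyone pulled traffic at once (worst case)."""
    return PON_RATES_GBPS[technology] / split_ratio

for tech in PON_RATES_GBPS:
    print(f"{tech}: {per_customer_gbps(tech):.3f} Gbps per customer on a 32-way split")
```

Even in this worst case, a 32-way 25GS-PON split leaves each customer with roughly 780 Mbps, which is why the technology is aimed at cell sites and business customers rather than typical homes.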

The technology is being championed by the 25GS-PON MSA (multisource agreement) Group, which has come together to create a standard specification for the 25-gigabit technology. It's worth a glance at the group's website because the membership is a virtual who's-who of large ISPs, chip manufacturers, and electronics vendors.

I'm not yet hearing many complaints from ISPs that GPON technology is being overwhelmed in residential neighborhoods. I've asked recently, and most of the small ISPs I queried told me that individual neighborhood PONs average about 40% utilization, meaning that 40% of the bandwidth to customers is being used at the same time. ISPs start to get worried when utilization routinely crosses 80%, and ideally, ISPs never want to hit 100% utilization, which is when customers start getting blocked.
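As a rough sketch of how an ISP might watch those thresholds, the functions below (the names and thresholds are my own, purely illustrative) flag a PON once simultaneous demand crosses the 80% worry line described above:

```python
def pon_utilization(demand_gbps: float, capacity_gbps: float = 2.4) -> float:
    """Fraction of PON capacity in simultaneous use (GPON capacity by default)."""
    return min(demand_gbps / capacity_gbps, 1.0)

def needs_upgrade(utilization: float, worry_threshold: float = 0.8) -> bool:
    """An ISP starts worrying once utilization routinely crosses 80%."""
    return utilization >= worry_threshold

# A typical neighborhood PON today: ~40% utilization, no cause for alarm.
print(needs_upgrade(pon_utilization(0.96)))   # 0.96 / 2.4 = 40%
# A PON pushing 2.0 Gbps of simultaneous demand is a different story.
print(needs_upgrade(pon_utilization(2.0)))    # ~83%
```

At 100% utilization the function caps out, which mirrors the real network: there is no headroom left and new traffic simply gets blocked.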

The cellular carriers were the first champions of 10-gigabit PON technology. This is the most affordable way to bring multi-gigabit speeds to small cell sites. The network owner can deploy a 10-gigabit core and communicate with multiple small cell sites without needing the extra field electronics used in a Metro Ethernet network. The 25-gigabit technology is aimed at cell sites and other large bandwidth users.

The technology is smartly being designed as an overlay onto existing GPON and XGS-PON deployments. In an overlay network, a GPON owner can continue to operate GPON for residential neighborhoods and can operate XGS-PON for a PON of businesses with larger bandwidth requirements. The 25GS-PON would be used for the real heavy hitters or perhaps to create a private network between locations in a market.

I’ve been thinking about the benefits of 25GS-PON over the other current GPON technologies.

  • This is a cheaper technology than the alternatives. The MSA group has designed 25GS-PON as a natural progression beyond GPON and XGS-PON, which means most of the components benefit from the huge manufacturing economy of scale for PON technology. If 25GS-PON costs are low enough, this could spell the eventual end of Metro Ethernet as a technology.
  • It’s a great way to bring big bandwidth to multiple customers in the same part of a network. This technology can supply bandwidth to small cell sites that wasn’t imaginable just a few years ago.
  • The technology is easy to add to an existing network by sliding a new card into a compatible PON chassis. That means no new racks in data centers or new shelves in huts.

Electronics manufacturers have been frustrated by how long the GPON technology has remained viable – and in many applications might be good for years to come. Telecom manufacturers thrived in the past when there was a full replacement and upgrade of electronics needed every seven years. Designing 25-gigabit PON as an overlay is an acknowledgment that upgrades in the future are going to be incremental, and upgrades that don’t overlay onto existing technologies will likely be shunned. ISPs are not interested in rip and replace technologies.

The 25GS-PON technology might become commercially available as early as the end of 2022. There have already been field trials of the technology. After that, the vendors will move on to the next PON upgrade. There’s already talk of whether the next generation should be 40-gigabit or 100-gigabit.

Buy American and Federal Grants

Near the bottom of the Infrastructure Investment and Jobs Act, starting on page 2315, is a requirement that any infrastructure funded with federal dollars must comply with the Build America, Buy America Act. This applies to the $42.5 billion in broadband infrastructure included in the IIJA, but also to all other infrastructure projects that include federal funding. The IIJA says that as of the date of enactment of the Act, domestic content procurement preference policies apply to all Federal Government procurement and to various Federal-aid infrastructure programs.

I think this clearly means that Buy American rules apply to federal infrastructure projects awarded after November 18, 2021, the date the IIJA was published in the Federal Register. This would include RDOF funding, ReConnect grants, the NTIA grants, and anything else awarded after that date. I’ll have to leave this up to the lawyers, but this also could apply to state and local grants awarded before that date but not yet constructed, such as CARES or ARPA projects.

The concept of buying American has been around since 1933, when Congress passed the original Buy American Act, which applied specifically to federally-funded projects to build roads and railroads. That law was aimed at making sure that railroads used American-made iron and steel for rails, train engines, and railcars.

The Buy America concept was first applied to telecom in the 2009 ARRA stimulus grants. Those grants required that a substantial amount of the raw materials used to build broadband networks comply with the Buy America rules. At the time, it was nearly impossible to buy compliant electronics, and I recollect that the NTIA issued a blanket waiver from parts of the Buy America rules (but that's subject to verification).

This new IIJA legislation puts a major emphasis on buying American. One of the intentions of the Act is to provide incentives for manufacturers to bring factories and jobs back to the U.S. Consider the following language from the IIJA:

United States taxpayer dollars invested in public infrastructure should not be used to reward companies that have moved their operations, investment dollars, and jobs to foreign countries or foreign factories, particularly those that do not share or openly flout the commitments of the United States to environmental, worker, and workplace safety protections; in procuring materials for public works projects, entities using taxpayer-financed Federal assistance should give a commonsense procurement preference for the materials and products produced by companies and workers in the United States in accordance with the high ideals embodied in the environmental, worker, workplace safety, and other regulatory requirements of the United States;

The Act lists specific materials and components that should be sourced to American companies, including steel, iron, manufactured products, non-ferrous metals, plastic and polymer-based products (including polyvinylchloride, composite building materials, and polymers used in fiber optic cables), glass (including optic glass), lumber, and drywall.

That list covers almost every component of building a fiber network. Fiber optic glass must be American-made, as must be the material used in fiber-optic sheaths. Conduit must be American-made. The definition of ‘manufactured items’ in the Act covers all electronics.

The IIJA goes on to define the specific rules for what counts as American-made. Construction materials like fiber optic cable and conduit must be 100% made in the U.S. At least 55% of the cost of the components of manufactured goods must be American-made. This last requirement is going to cause consternation for equipment vendors, which will somehow have to disclose the source and cost of each component of their electronics. In today's complex supply chain, this isn't going to be easy. It gets even more complex for supply houses that buy and assemble various components into ready-to-use electronics assemblies. This will mean more paperwork for the industry – everybody that builds a project using federal funding must be ready to prove compliance with the law.
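As a simple illustration of the 55% component-cost test, a vendor would need to roll up a bill of materials something like the sketch below. The component costs here are hypothetical, and the actual accounting rules will come from federal guidance, not from this simplified math.

```python
def domestic_cost_share(components):
    """components: list of (cost, is_domestic) pairs for one manufactured good.
    Returns the fraction of total component cost that is American-made."""
    total = sum(cost for cost, _ in components)
    domestic = sum(cost for cost, is_domestic in components if is_domestic)
    return domestic / total if total else 0.0

# Hypothetical bill of materials: $65 of $100 in component cost is domestic.
bom = [(40.0, True), (25.0, True), (35.0, False)]
share = domestic_cost_share(bom)
print(f"Domestic content: {share:.0%} -> "
      f"{'meets' if share >= 0.55 else 'fails'} the 55% rule")
```

The arithmetic is trivial; the hard part, as noted above, is getting an honest cost and country-of-origin figure for every component from a global supply chain.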

There are ways for federal agencies to get waivers from these rules – but the legislation makes it clear that waivers need to be exceptions and not routinely or easily granted. The intention of this law is to force vendors to change procurement practices and to buy raw materials and components from American sources. Since the law specifically called out the components of fiber optic networks, it’s not going to be easy to get waivers.

This is likely to cause disruptions in the short run as electronics manufacturers scramble to meet the 55% rule. It's not hard to imagine that these rules might worsen the current supply chain problems as vendors scramble to find compliant sources. But in the long run, these rules are great. We need to buy from American companies, support American jobs, and move manufacturing back to the U.S.