ISP New Year’s Resolutions

It’s the time of the year for New Year’s resolutions, and I asked some of my ISP clients if they are carrying unfinished tasks into the new year. Some of my clients laughed and told me that some of these tasks have been on their list for years.

Some of the wish list ‘resolutions’ I heard for 2026 included:

Integrate Records. Several ISPs said their customer and mapping records are less than ideal. They knew what a fully integrated records system should look like, where every record associated with a given customer is available at the fingertips of staff. They also want a system where the details of the physical network are integrated with customer records to be able to quickly identify where to look when there are outages or troubles. They also want a system where every new customer event and any new construction are easily and automatically integrated into existing records.

Reduce Truck Rolls. Several ISPs said they want to find ways to reduce truck rolls. They send trucks too many times when a problem could have been handled remotely. At the same time, they want to provide great customer service, and they want to send trucks when needed. In a competitive environment, they aren’t comfortable with charging customers for unneeded truck rolls.

Should They Raise Rates? Several ISPs are struggling with the idea of raising rates. They see that big ISPs are still raising rates. They are experiencing higher costs and know they should raise rates, but are still hesitant to do so.

Understand Profitability Better. Related to the question of raising rates, ISPs told me they would like to understand the factors that most impact their profitability. Are there expenses or functions they can drop that will save money? ISPs said their accounting system is a good way to measure monthly margins, but that they don’t get enough detail to fully understand the costs of operating the business.

Benefits Getting Too Expensive. Some ISPs say they are troubled by seeing the cost of health insurance and other benefits growing far faster than other costs. They struggle with what to do about it, and don’t want to cut benefits, but are worried about the cost trend.

Improve Sales to Businesses. Several ISPs told me that they have never felt fully comfortable selling to businesses in the same way that they sell residential broadband.

Dealing with Churn. Several ISPs said they struggle with finding a solution for dealing with churn. Too many times, a customer will move, and they don’t reach the new tenants until it’s too late.

Clear Out Inventory. A few ISPs laughed and told me that their accountants want them to clean out accumulated inventory, but that they dread the paperwork that comes with trying to quantify the mass of old electronics and construction materials that are no longer useful.

If you are an ISP, what unfinished tasks or goals are on your list?

Smartphones and Digital Literacy

A friend of mine, Frederick Pilot, recently asked me an interesting question. Is digital literacy that comes from using a smartphone the same as digital literacy from using a computer? It’s a great question, because the majority of Internet users in the world only have broadband access through a smartphone. In developing nations, 90% of broadband users only have access to a smartphone. In the U.S., 16% of adults only use a smartphone to reach the Internet.

There are skills needed to master using a computer that can’t be learned from using a smartphone. Computer users learn to use a mouse and to type – even people who speak to a computer need the mouse and keyboard. People working on computers learn how to create, save, and manage files. Computer users learn how to use operating systems and software programs.

By contrast, smartphone users mostly learn how to use apps. While some apps are complex, the skills learned generally apply mostly to the specific app.

It’s clear that learning how to navigate an app ecosystem is very different than mastering a computer ecosystem. Of course, some things are the same for both sets of users. Streaming video or shopping on websites is largely the same for everybody. Smartphone and computer users have email accounts and can use social media.

A key question is the degree to which only using a smartphone prepares somebody to work in a computer-based work environment. The biggest issue with smartphone-only users is that they have not learned to use a keyboard to type. It’s hard to imagine many computer-related jobs that don’t require at least some typing.

Interestingly, there are many work functions today that look more like apps than like spreadsheets or word-processing documents. I recently visited the doctor for my annual physical, and the practice has converted to a system that captures and transcribes what the doctor says as notes in the patient history. Much of the rest of the effort of using the system means clicking through a bunch of forms and checkboxes. But the doctor and staff still need to type. The doctor edits the notes if they aren’t accurate, and some of the forms require a typed response. This is a new system, and I have to imagine that over time, the amount of typing needed will decrease. My doctor said that his favorite feature is that the system always spells drugs and medical terms properly.

Training people to use a computer has changed a lot in recent years. It wasn’t too long ago when computer training meant learning how to use a word processor and a spreadsheet. People who train others how to use computers tell me they take a more practical approach today, and that training involves things like learning mouse basics; learning basic keyboard skills; learning how to create, find, save, and organize files; learning how to navigate an operating system; Internet basics like searching and using a browser; security awareness and how to avoid scams; and basic troubleshooting and what to do when things go wrong. Much computer training today is personalized and teaches a person to use the web functions that are most important to them, like using a banking website.

None of this discussion answers the original question, which asks if smartphone users are digitally literate. I’m sure that many smartphone users are fully literate in terms of being able to navigate the web. But that doesn’t mean they have the digital skills that employers are looking for. And that begs the question of what it means to be digitally literate.

100 Years of Bell Labs

When I first entered the industry in the 70s, Bell Labs held an exalted place as the organization responsible for inventing and perfecting the technologies we all used. Bell Labs was founded and owned by the giant AT&T monopoly and was operated with the brilliant concept of hiring the smartest people and letting them pursue research related to technology. Much of the research was funneled towards communication technologies, but the Labs produced a wide range of scientific and technical breakthroughs that benefited numerous industries.

Some of the key Bell Labs discoveries that benefited communications include:

  • The Transistor. Transistors replaced bulky vacuum tubes and were the basis for the microelectronics revolution.
  • Shannon’s Information Theory. This was the mathematical foundation of the digital age and defined how to treat data as a measurable entity (bits), and addressed data uncertainty, noise, efficient data compression, and transmission.
  • Fiber Optics. Bell Labs researched optical waves and turned that research into a technology for transmitting large amounts of data using lasers. Bell Labs discovered and developed erbium-doped fiber amplifiers (EDFAs), which were crucial for boosting signals over long distances, and that led to the development of the internet backbone.
  • The First Communications Satellite. Bell Labs designed and built Telstar, the world’s first communication satellite. This venture also included breakthroughs in solar cells and in travelling-wave tube transponders that amplified communications signals to reach Earth.
  • The Cellular Network Concept. Bell Labs scientists developed the concept of arranging wireless networks into cells. The Lab went on to develop the technologies used for 1G, 2G, and 3G cellular networks.
  • UNIX and the C Programming Language. Bell Labs developed the UNIX operating system and the C programming language, which became the basis for much of modern software development.
  • Digital Signal Processing (DSP). Scientists developed the concepts, algorithms, and hardware behind the first single-chip digital signal processor, which has become the basis for the chips used in most modern electronics.
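Shannon’s central idea, that information can be counted in bits based on the probabilities of symbols, can be sketched in a few lines of Python. This is only an illustration of the entropy formula, and the function name is my own:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy of the message's symbol distribution, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    # H = -sum(p * log2(p)) over the observed symbol probabilities
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two equally likely symbols carry exactly 1 bit per symbol;
# a message with no variation carries no information at all.
print(entropy_bits_per_symbol("ABABABAB"))  # 1.0
print(entropy_bits_per_symbol("AAAAAAAA"))  # 0.0
```

This entropy is the quantity that sets the theoretical floor for lossless compression: no encoder can average fewer bits per symbol than the entropy of the source.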

Bell Labs researchers earned nine Nobel Prizes and pushed the boundaries of physics, computing, and telecommunications. IEEE recently celebrated some of the achievements of Bell Labs in areas other than communications, which include:

  • Molecular Beam Epitaxy. This was a chip-making process that is key to the manufacture of modern chips and lasers.
  • Fractional Quantum Hall Effect. This physics breakthrough defined how electrons could become entangled, which led to the development of quantum computing.
  • Convolutional Neural Networks. This is a specialized deep learning model inspired by the human visual cortex, which has become the basis for modern AI.
  • Super Resolution Fluorescence Microscopy. This is a series of techniques that allow images to have resolution higher than the limits imposed by the diffraction limit of light, which has had major benefits in biological research.
  • Charge-coupled Device. This is a light-sensitive integrated circuit that captures images by converting photons into electrons, and which is the basis for digital imaging, medical imaging, and modern astronomy.

Bell Labs is now owned by Nokia, which acquired the company when it purchased Alcatel-Lucent. Lucent was the technology spin-off that AT&T created in 1996 to hold its equipment business, long after the breakup of AT&T into the Baby Bell companies.

Congress Active with Broadband Bills

We’re near the end of the year, and Congress is recessed until the new year. That hasn’t stopped Congress from introducing interesting new bills related to broadband. Bills introduced in the first session of a Congress carry over to the second session, and I assume these new bills are meant for deliberation in 2026.

Support for Non-Deployment Funds. Senators Roger Wicker (R-MS) and Shelley Moore Capito (R-WV) introduced the Supporting U.S. Critical Connectivity and Economic Strategy and Security for BEAD Act. This legislation would authorize States to use any remaining BEAD non-deployment funds that were not used to build infrastructure. The bill directs NTIA to give these funds to States to support functions like enhancing public safety, improving network resiliency, strengthening national security, and developing a qualified workforce for emerging technologies. This is a major issue since non-deployment funding has grown to over $21 billion, which is half of the $42.5 billion in BEAD funding.

To some degree, this law feels redundant because it reiterates the same use of non-deployment funds that was directed in the original IIJA legislation that created BEAD. The need for this bill is only an issue because NTIA has been referring to the monies not used for broadband deployment as ‘savings’, which they want to return to the U.S. Treasury. If enacted, this would be Congress’s way of emphasizing that it meant what was written in the original law. If enacted, it also means that a lot more of the BEAD funding could have been used to build fiber and other long-term technologies instead of going to satellite broadband.

Expand Mental Telehealth. Representatives Andrea Salinas (D-OR) and Diana Harshbarger (R-TN) reintroduced the bipartisan Home-Based Telemental Health Care Act. If enacted, the legislation would expand access to telehealth services, including mental health and substance use care. The legislation is aimed at rural Americans who have barriers to in-person care, especially for individuals working in the farming, fishing, and forestry industries.

The legislation would create a new grant program that would provide funding for mental health and substance use care for people living in designated Health Professional Shortage Areas. The grants would be managed by the Department of Health and Human Services in consultation with the U.S. Department of Agriculture. Funding could be used to expand telemental health services, including providing broadband access and devices to use telehealth technology. The grants would also explore the feasibility of expanding the program to in-person services. The bill authorizes $10 million in grants for fiscal years 2025 through 2029.

Sunset Section 230 Immunity. Senators Lindsey Graham (R-SC), Dick Durbin (D-IL), Chuck Grassley (R-IA), Sheldon Whitehouse (D-RI), Josh Hawley (R-MO), Amy Klobuchar (D-MN), Marsha Blackburn (R-TN), Richard Blumenthal (D-CT), Ashley Moody (R-FL), and Peter Welch (D-VT) introduced the Sunset Section 230 Act. The legislation would repeal Section 230 of the Communications Act two years after the date of enactment. Section 230 was created in 1996 as a part of the Communications Decency Act. The purpose of Section 230 is to grant limited immunity to online platforms for user-generated content. Section 230 also shields online platforms from any damages from good-faith efforts to moderate or block objectionable content.

The stated purpose of the new legislation is to allow the public to hold platforms accountable for allowing illegal content, child exploitation, and misinformation, based on the underlying premise that the big web platforms currently have near-immunity for damages that arise from their “profits over people” operating model. This is going to be a controversial law, and opponents of the legislation argue that the law will stifle free speech, force platforms to over-censor to avoid massive lawsuits, harm small online platforms, and fail to address underlying issues of harmful content amplification by big tech.

 

Security as an ISP Service

Several industry news outlets are reporting on a recent survey from Parks Associates that shows that 19% of homes with broadband have a professionally monitored security system, and another 7% pay for partial security services like video storage or monitoring alerts.

A few other interesting statistics relate to home security: 33% of broadband subscribers have a smart camera of some sort. Roughly 78% of homes that own a security system pay for an external service, such as professional monitoring, self-monitoring, or video storage. The Parks research shows that the average fee for home security services is $54 per month.

A decade ago, a number of my ISP clients offered security services. The ISPs either sold or charged a monthly fee for the security hardware and largely outsourced the monitoring to one of the big security companies like ADT, Brinks, or Vivint.

I recently looked at the products and prices of ISPs of all sizes across a several-state area. I was surprised to find that almost none of the ISPs offer security services today. That surprised me because a decade ago, I would have found a quarter of ISPs offering security services.

There are various reasons why small ISPs exited the business. I know two ISPs that sold their security customers to one of the big security companies. They told me that the big companies were offering a sales price per customer that they couldn’t turn down. When I see the average monthly fee of $54, I can understand why security companies are willing to buy existing customers.

Some of my clients were never comfortable with the financial risk of something going wrong with a security system. They were uncomfortable sending security monitoring to a distant company they didn’t know. The downside risk of a big lawsuit from a security system failure felt larger than what the revenue stream could justify.

A lot of ISPs were not comfortable selling hardware and software systems that they didn’t know a lot about. Some got frustrated when vendors suddenly stopped supporting the hardware they had chosen. A lot of ISPs were uncomfortable with the entire process of selling expensive systems to homeowners at a markup.

ISPs in the security business said that the business required a lot of truck rolls and meant answering a lot of calls from customers. I think some of the ISPs in the security business figured it was less profitable because of these extra costs.

I remember that fifteen and twenty years ago, the whole ISP industry spent a lot of time talking about wanting to be something more than a dumb pipe provider. They believed that ISPs that only sold broadband connections and nothing else had a bleak long-term future.

And yet, my recent investigation showed an entire region of the country where almost every ISP is a dumb pipe provider. Most ISPs are now comfortable with this business model. I can think of several changes that have made ISPs more comfortable with this concept. One is that ISPs generally have a much larger market penetration rate than they did a decade ago. In most markets, roughly 90% of homes buy some sort of broadband, while penetration rates twenty years ago were a third of that. ISPs generally also charge a lot more today. When broadband was a new product, pricing was often set to lure customers to try broadband and to stay on the network. An ISP with a good broadband product and good customer service doesn’t have the same worries it had when broadband was a new product.

I know some ISPs who still happily offer security services. I also know some who offer unique services. For example, a few operate in seasonal areas and offer cheap packages of cameras along with water and fire monitors for absentee landlords as a way to convince them to pay for broadband all year. But the bottom line is that most ISPs seem to be happy being dumb pipe providers and aren’t willing to pursue other product lines that cause a lot of work or that have a questionable return.

Seven Stages of the Internet

In October, Dr. Mallik Tatipamula, the CTO of Ericsson, and Dr. Vinton Cerf, a VP and Chief Internet Evangelist for Google, published an article in IEEE Spectrum that postulated that there will be seven stages of the Internet over time.

They say that we have already experienced the first three stages, which include the original Internet, integrating the Internet into mobile devices, and extending the Internet to connect to Internet of Things (IoT) devices. The article postulates about what comes next.

The following are the seven stages:

The Original Internet. The objective of the original Internet was to connect computers and servers to share information. This evolved into the early World Wide Web, which largely democratized access to information.

The Mobile Internet. This was the process of developing apps to take advantage of the ability of mobile phones to connect to the Internet. This has evolved into an explosion of social media and transformational applications, like the ability to make cashless payments using phones. This opened the Internet to people around the world who don’t have access to a computer.

The Internet of Things (IoT). This phase of the Internet extended connectivity to machines other than computers and phones. The world is now full of smart appliances, connected cars, sensors, smart factories, and smart city applications.

The Internet of Agents (IoA). We’re now entering the next phase of the Internet where connectivity will be ubiquitously extended to AI agents that perceive, reason, and collaborate. Virtual AI agents include things like digital assistants, coding copilots, workflow orchestrators, and trading algorithms. Physical AI agents will manifest in devices like autonomous cars, drones, industrial robots, and smart medical devices.

The Internet of Senses (IoS). They predict the next phase of the Internet will integrate wearables that extend human perception of touch, taste, and smell. For example, a potential buyer will be able to ‘feel’ the texture of clothing online, or smell a perfume before buying it. These changes will come through advances in haptic wearables, digital olfaction, and brain-computer interfaces.

The Ubiquitous Internet. This will arrive when there is seamless global access to broadband across land, sea, air, and space. Every person on the planet and their devices will have access to the Internet.

The Quantum Internet. The final stage of the Internet will integrate quantum information into the Internet using qubits and quantum entanglement. This will create ultra-secure communications that are resistant to interception and hacking, and will enable unprecedented precision in measuring time, motion, and the environment.

eduroam – Student Broadband Access

The largest WiFi network you may have never heard about is eduroam. This is a global WiFi roaming network operated by and for the educational community. The eduroam network is huge and is currently available at 38,000 locations in over 100 countries and territories. In 2024, the network logged 8.4 billion authentications of users joining an eduroam WiFi network.

In the U.S., over 3,800 locations representing 1,000 educational institutions (universities, colleges, and research facilities) are part of the eduroam network. In recent years, sponsored by the folks at Internet2, eduroam has been extended to 1,461 hotspots for K-12 schools, libraries, museums, and community spaces. The K-12 eduroam network is active in Arizona, Connecticut, Michigan, Minnesota, Nebraska, Nevada, Oregon, Utah, and Washington. Missouri and Wisconsin are in the process of joining the network.

eduroam operates by letting any student with eduroam credentials log in to use WiFi at any node on the network. eduroam stresses security for students and their data. Joining the network doesn’t involve a web portal; instead, authentication uses 802.1X, an industry standard for port-based network access control (PNAC), with end-to-end encryption. Each participating organization must operate a RADIUS authentication server, and a request to join the eduroam network is routed to the user’s home institution to verify the authenticity of the user. User data is only stored at the home location, and nothing is kept on the servers at a remote location.
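The routing step described above can be sketched in a few lines of Python. The institutions and server names here are invented for illustration, and a real deployment routes requests through a hierarchy of national and international RADIUS proxies rather than a simple lookup table:

```python
# Hypothetical sketch of eduroam-style realm routing: the visited network
# inspects the realm in the user identity and forwards the authentication
# request to the home institution's RADIUS server.

HOME_RADIUS_SERVERS = {
    "stateu.edu": "radius.stateu.edu",
    "k12.example.org": "radius.k12.example.org",
}

def route_auth_request(user_identity: str) -> str:
    """Return the home RADIUS server that should verify this user."""
    # eduroam identities look like user@realm; only the realm is needed
    # for routing, so the visited site never needs the user's credentials.
    _, realm = user_identity.rsplit("@", 1)
    try:
        return HOME_RADIUS_SERVERS[realm.lower()]
    except KeyError:
        raise ValueError(f"No route to home institution for realm: {realm}")

print(route_auth_request("jsmith@stateu.edu"))  # radius.stateu.edu
```

The key property this illustrates is that the visited hotspot only forwards the request; the password check and any stored user data stay with the home institution.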

The network can be expanded as needed to handle emergencies. For example, during the pandemic, many participating institutions established eduroam hotspots in parking lots or common spaces in college communities so that students could access the educational WiFi network.

The benefit to students is obvious. A K-12 student who is part of eduroam can gain access at any node on the eduroam network. These may be installed outdoors at all schools in a community, in libraries, or in a public space of some sort. It gives students access to multiple locations to connect to the school network outside of the home. Students on vacation can gain access if they are near any of the many nodes on the network.

There is a major advantage to universities using the system. Visiting students and professors can gain access to free WiFi without having to ask for credentials from the institution they are visiting. eduroam also comes with a full suite of reports on usage and diagnostics for each institution.

eduroam is being used around the world to extend broadband access. This article documents an eduroam effort in Uganda to extend broadband. The typical user in Uganda uses only 1.7 GB per month of broadband due to cost and data caps on WiFi usage. eduroam was originally extended to 300 locations in Kampala, the capital, like libraries, cafes, hostels, and other public spaces. In 2022, eduroam was extended to 18 more towns with support from the Internet Society Foundation BOLT program. To overcome local challenges, many of the 600 remote hotspots are solar-powered.

 

Let the 6G Hype Begin

In case you haven’t been paying attention, wireless vendors are busy working towards the introduction of 6G starting around 2030. The industry has introduced a new generation of cellular technology every ten years since the first 1G network was introduced in 1981.

I’ve been reading a lot of industry press on the upcoming 6G generation of cellular. I have to admit that some of the claims gave me a good laugh, because the vendors in the industry are touting a lot of potential applications for 6G that seem to be a stretch, just like happened during the lead-up to 5G.

Before describing a few of the promises I’ve been reading for 6G, let me remind you of some of what we were promised with 5G that never really materialized. 5G was touted to be bringing:

  • A superfast network, enabled by clusters of 5G small cell sites that would bring the network close to everybody.
  • Super-low latency of 4 milliseconds, even in moving vehicles. It was promised that 5G would be able to compete with fiber for functions like real-time gaming and stock trading.
  • Speeds up to 10 Gbps through the widespread introduction of frequencies between 20 and 60 GHz.
  • A greatly increased capacity for simultaneous connections that would mean 5G subscriptions for cars, smart watches, and the many 5G-enabled smart devices in the home.
  • New technologies, like stores having 5G-enabled hologram displays. Experts envisioned a 5G network strung along every street and road to enable smart self-driving cars. There was even talk about using 5G to enable medical operations performed by robots controlled by remote doctors.

The coming introduction of 6G also includes a lot of claimed benefits. 6G will:

  • Enable immersive communication and human-machine interactions. Use cases include immersive eXtended Reality (XR), remote multi-sensory telepresence, holographic communications, haptic sensors and actuators, and multi-sensory interfaces.
  • Lower operator costs, which will mean affordable and meaningful connectivity for all. This means universal coverage, including sparsely populated areas. 6G will create a seamless interface between terrestrial and non-terrestrial networks.
  • Be able to connect to a massive number of devices that will enable smart cities, smart cars, environmental monitoring, and sensors for agriculture. (Sounds like the same claim made for 5G.)
  • Enable connections to smart machines for the remote operation of robots, autonomous factories, and the creation of digital twins for factories, health care, and other complex use cases.
  • Deliver peak data rates between 50 and 100 Gbps.
  • Achieve a target air interface latency between 0.1 ms and 1 ms.
  • Provide terrestrial-based locating technologies that can locate objects within 1 to 10 centimeters.
  • Support AI-related capabilities for distributed data processing, distributed learning, AI computing, AI model execution, and AI model inference.

Just like with 5G, the real-life implementation of 6G will be determined by the functions that wireless carriers can monetize. 5G is outperforming the hype in some areas, and most urban 5G networks today are considerably faster than the 100 Mbps goal included in the early 5G hype. Yet most of the promised 5G functionality never materialized once carriers found that customers prefer free WiFi to paying for more cellular subscriptions. The same is going to be true with 6G. It’s hard to imagine that introducing 6G will automatically trigger widespread use of multi-sensory telepresence or somehow bring cell towers to rural America. But you can’t blame the vendors, who want to get carriers excited about 6G and willing to pay for the upgrades.

Is the FCC an Independent Agency?

FCC Chairman Brendan Carr recently told Congress that he doesn’t believe that the FCC is an independent agency. The FCC went so far as to remove the term independent from its website. The bottom line of Chairman Carr’s opinion is that he believes the FCC should take direction from the White House.

It’s an interesting position that contradicts the long-standing understanding that the FCC and many other federal agencies are independent, meaning that they don’t take direction directly from the Administration but are instead required to follow the enabling laws and rules established by Congress. There are a number of independent agencies other than the FCC, including the EPA, SEC, Federal Reserve, NASA, CIA, FTC, SSA, and NTSB.

There are several key characteristics of independent agencies. First, they are not part of, and don’t report to, any of the fifteen cabinet departments like State or Treasury. Independent agencies were generally established by Congress to be somewhat shielded from political pressure. For example, it’s not easy for the President to fire the head of an independent agency. The agencies are often structured with a multi-member Board or Commission, with rules that typically require representation from both parties. Some agencies, like the SEC and the FCC, are accorded rule-making power within a specified range of issues.

The FCC was created by Congress with the passage of the Communications Act of 1934. The agency has been directed by Congress to regulate radio, television, wire, satellite, cable, and the Internet. The Act did not include language that specified the FCC was independent. The independent status is inferred from the structural provisions in the Act that define how the agency operates. The relevant language appears in Section 4(a) of the Act (codified as 47 U.S.C. § 154(a)), which establishes the structure of the Commission. The Act created a commission of five (originally seven) members who are appointed by the President and confirmed by the Senate. The Commission must be bipartisan, and no more than three members can be from the same political party. Commissioners serve for fixed, five-year terms. The FCC is required to follow laws passed by Congress aimed specifically at the agency.

The Supreme Court has explored issues related to independent agencies over the years. Supreme Court rulings, like Humphrey’s Executor v. United States (1935), defined a key element of an independent agency to be a lack of explicit legislative language giving a President the power to remove commissioners at will (i.e., for any reason). Instead, the ability to remove commissioners is widely understood to be limited to specific reasons like “inefficiency, neglect of duty, or malfeasance in office.” This structure of independent agencies is done deliberately to insulate agencies from direct presidential control and ensure decisions are based on the public interest rather than political pressure.

Chairman Carr’s statements are a direct challenge to Congress. Historically, independent agencies like the FCC are given general marching orders from Congress through legislation, but even then, the agency is free to interpret specifically how to enact laws. Chairman Carr says that he feels empowered to take direction directly from the White House, and it seems likely this will eventually trigger a showdown. At some point, Congress will have to assert its authority or cede its power to the Administration.

The FCC has never been free from politics, because almost nothing in Washington D.C. can be. The FCC Chairman has traditionally been from the same party as the White House and is typically sympathetic to policies of the administration. But there has always been an uproar if an FCC Chairman has been accused of directly taking direction from the administration. An example of this happened when Republicans accused Chairman Tom Wheeler of too closely following the White House direction on the issue of net neutrality.

The long-term repercussions of a political FCC are not good for the industry. While ISPs, carriers, and programmers all have a wish list of regulations they don’t like, there has always been a huge benefit for regulated companies to have regulatory certainty, which means that rules don’t change drastically with every change of administration. Regulated companies might complain loudly about being overregulated, but they benefit financially from knowing the rules, since this allows them to develop long-term strategies. Every large ISP will quietly admit that regulatory certainty is far better for them than rules that change with each Administration.

BEAD on Hold?

It appears that NTIA missed an important step when it generated the new BEAD rules in June in the BEAD Restructuring Policy Notice. That is the document that changed the scoring of BEAD grants from using a dozen different scoring criteria to choose grant winners to a new method that focused on the character of the proposed technology (priority or not) and the cost per passing.

On December 14, the GAO ruled that the changes made by NTIA in the Policy Notice are outside of the scope of NTIA’s authority. According to the GAO decision, a major change like the one implemented by NTIA must be submitted to both Congress and the Comptroller General before it can take effect.

Agencies like NTIA are subject to the Congressional Review Act (CRA), which defines the administrative process that government agencies must follow to change rules. The GAO says that NTIA’s Policy Notice implements, interprets, and prescribes law or policy, which triggers provisions of the CRA. The Policy Notice not only changed how NTIA administers the BEAD program, but it also changed the process for how Eligible Entities (State Broadband Offices) go about seeking funding under the program.

The GAO letter lists the possible ways that NTIA could be exempt from seeking Congressional approval and concluded that none of the exemptions apply to NTIA’s Policy Notice changes. The key trigger for the GAO ruling was that the rule changes created a substantial effect on non-agency parties, meaning States and ISPs.

The conclusion of the GAO is as follows: “The Policy Notice is a rule for purposes of CRA because it meets the definition of a rule under APA and no CRA exception applies. Therefore, the Policy Notice is subject to CRA’s requirement that it be submitted to Congress and the Comptroller General before it can take effect.”

The bottom line of this ruling is that NTIA had no authority to unilaterally change the BEAD rules in such a drastic fashion. The BEAD rules in the IIJA legislation were specific, and the changes NTIA ordered with the Policy Notice were significant enough to require NTIA to seek Congressional approval before making the changes.

This is an interesting twist. In normal times, this would mean that NTIA would have to put BEAD on hold until this is resolved. NTIA would not be able to enforce the changes in the Policy Notice, and if Congress didn’t approve the NTIA changes, the BEAD program would probably reset to its status in June before the Policy Notice. It would mean that all of the changes to grant scoring and the requirements for States to determine priority technologies would be invalid. It would mean that all tentative grant awards made under the revised rules are invalid. States would probably have to re-score grant applications under the original BEAD rules for selecting winners.

But we don’t live in normal times, and this Administration is currently ignoring the rules of the Congressional Review Act in many other venues and programs. So what does this mean? It may mean nothing, and NTIA might just ignore this GAO decision. This might trigger action from Congress. There has been a lot of unhappiness that the amount of grant awards was trimmed so drastically. This decision certainly gives an actionable reason for anybody who wants to take NTIA to court to halt the BEAD process during litigation. Like everything associated with BEAD, awards made under the new rules are going to be under a cloud, and that makes everybody uncomfortable.