AT&T Feels Sorry for Cable Companies

In a recent interview with Diana Goovaerts of FierceTelecom, Chris Sambar, the AT&T EVP of Technology Operations, said that the company is not worried about competition from cable companies. He said that AT&T’s fiber technology, which is capable of symmetrical 10-gigabit speeds, is far beyond the capability of the cable companies.

He rightfully identified that cable companies must spend a lot on DOCSIS 4.0 to come close to catching up with fiber. What he didn’t mention is that the new cable technology is probably five years away from being market-ready. His zinger in the interview was when he said, “I almost feel bad for them (the cable companies)”.

This is interesting because we haven’t seen any real trash-talking between telcos and cable companies in decades. There was a lot of noise when DSL and cable modems both had 1 Mbps download capabilities, and then again when Verizon first launched FiOS. This quote is going to be talked about in every cable company board room in the coming months because it encapsulates an industry of fiber providers that are not afraid of tackling the cable companies head-on.

The cable companies have had an unprecedented run of clobbering DSL in the market and becoming near-monopolies in most urban markets. My firm hasn’t done a survey in several years where the cable company hasn’t captured at least two-thirds of broadband customers in an urban market.

But as AT&T and other telcos undertake an aggressive fiber overbuilding program, the industry is about to change. AT&T alone plans to pass an additional 15 million homes and businesses by the end of 2025. We also see aggressive buildouts planned by Verizon, Frontier, Windstream, Consolidated, and many others.

AT&T’s CEO John Stankey was quoted last year saying that the company believes that it will gain at least a 50% market share within three years after building fiber in a neighborhood. Some of those customers will be AT&T DSL customers converted to fiber, but a lot of the customers are going to be coming from the cable companies.

If the broadband world only consisted of the cable companies and the big telcos, we could pass off this latest episode as posturing by two industries that intend to continue to share duopoly market power. Telcos will win back customers with fiber, but if the two big incumbents were the only competitors in markets, then after a few years, we’d see a new equilibrium with telcos bigger than today. That’s what we’ve seen in the Northeast in the years since Verizon built its FiOS fiber – Verizon and the cable companies reached an equilibrium where each enjoys high prices and where both are profitable.

But the world is changing around the two big sets of incumbents. There are other competitors edging into urban broadband markets. For example, in the fourth quarter of 2021, T-Mobile added 224,000 customers to its fixed cellular home broadband. While this is being offered in rural areas, T-Mobile says most of its gains are coming from suburban and urban markets where the product offering of decent 100 Mbps speeds and low prices is peeling customers from both the cable companies and the telcos. While 224,000 new customers may not sound like a lot, the whole rest of the broadband industry only added 632,000 net customers in the third quarter of last year. T-Mobile has quickly grown to 646,000 total home broadband customers and will soon break into the top ten list of ISPs.

If T-Mobile were the only competitor, there still wouldn’t be much concern from the big companies. But both AT&T and Verizon are getting ready to unleash a nationwide rollout of a fixed wireless product similar to T-Mobile’s. We’re also seeing the rudimentary beginnings of other wireless providers like Starry, which said it plans to grow to 1.4 million customers by 2026. As mentioned earlier, there are millions of lines of fiber being built each year by Frontier, Windstream, Consolidated, TDS, and many other smaller players – all of these ISPs have the cable companies in their crosshairs.

AT&T has thrown down the gauntlet for the cable companies. The cable companies can watch their customer base erode while waiting for DOCSIS 4.0. Or the cable industry could follow the lead of smaller cable companies like Altice and start converting to fiber now. But unlike AT&T, which will gain new revenues to help pay for fiber, the cable companies already have a large majority of customers in most markets. Building fiber will be harder to justify for the cable companies if they are losing customers.

Comcast and Charter still see the lion’s share of broadband customer growth each quarter. We’ll really know the cable companies are in trouble when we see that metric slip. If everything AT&T says comes to pass, we ought to see cable companies losing customers a few years from now.

Open Wire Telephony

I saw an article recently that reminded me about the early days of telephone technology. The article talks about the barbed wire fences used to bring the first rudimentary communications links to the remote Texas Panhandle.

Telephony using insulated copper wires started to appear in cities in the U.S. in the decades following the invention of the telephone in 1876 by Alexander Graham Bell. Around the turn of the 20th century, engineers developed a technology that could carry telephone signals for a greater distance using large-gauge bare copper wires. The technology involved installing two side-by-side bare wires – one to communicate in each direction.

Engineers discovered that a properly insulated bare-wire network would have minimal power loss even over great distances. Bell Long Lines undertook the stringing of open wire copper routes on tall poles along routes between cities. The physical network consisted of bare copper wires attached at each pole to glass insulators. When I broke into the industry in the 1970s, I think every telephone technician and engineer had a green glass insulator sitting on their desk.

The cost of building poles and installing copper wires was expensive, and Bell Long Lines recovered its investment by providing expensive long-distance calls that generated enough revenue to justify building the network. Long-distance rates to call from coast to coast at the start of the 20th Century were around $1 per minute – adjusted for inflation, that’s over $33 per minute in today’s dollars. Only corporations, the very rich, or the government could afford to make long-distance calls in the early days of telephony.

The article talks about how the ranchers at the XIT ranch in Texas got creative and installed the technology using existing strands of barbed wire. The XIT ranch was gigantic and enclosed three million acres of grazing land. To provide context, the northern border of the ranch was 162 miles long. The quality of the connections using barbed wire was expectedly dreadful, but the connections were good enough to notify other parts of the ranch about emergency situations like a grass fire. The article says the ranchers tinkered and improved the quality of the barbed wire network over time by adding insulators along the fence to minimize the wire touching any surfaces that added interference. Early in my career as a consultant, I visited the XIT Rural Telephone Cooperative, and I remember seeing insulators along barbed wire fences. They were no longer used for telephony, and I never made the connection at the time to understand that the fences had been rudimentary telephone lines.

The technology for using open wire networks improved over time with the introduction of open wire carrier, which used frequency division multiplexing. First-generation open wire carrier could carry up to four simultaneous calls on a pair of wires, and over time the carrier technology improved to carry up to sixteen calls at the same time on a pair of open copper wires.

The use of open wire technology for long-haul transmission of telephone calls carried into the 1970s. Rural telephone companies built last-mile telephone systems using the traditional twisted-pair copper wires. But open wire technology was still the preferred way to send telephone signals for longer distances, and there were always one or more routes leaving a rural telephone exchange that used open wire technology. These routes were always easy to spot due to the insulators.

The technology lasted even longer for railroads, which strung long-haul copper networks between stations so they could have free calling. I wouldn’t be shocked if there are still a few of these routes in place along rural railroad spurs in the remote west.

Open wire technology was ultimately replaced by better technologies using traditional copper wiring, microwave radios, and ultimately by fiber. But many rural telephone companies kept the old open wire routes in place as an emergency backup for times when other wires went out of service.

I’m prompted to write blogs about older telco technologies when I occasionally ponder how far the world of communications has come in a relatively short time. We went from open wires to satellites and fiber optics in less than a century – largely thanks to Bell Labs, which constantly pushed the frontiers of how we communicate. I wonder what the folks at the XIT Ranch of 1900 would make of the 10-gigabit fiber routes that likely serve the area today?

FCC to Tackle Data Breaches

The FCC has a new Notice of Proposed Rulemaking (NPRM) concerning an update of customer proprietary network information (CPNI) rules. The FCC wants to strengthen the rules concerning notifying customers of a data breach.

CPNI rules are codified at the FCC from Section 222(a) of the Telecommunications Act of 1996. CPNI rules are intended to protect customer data. For those that haven’t read CPNI rules for a while, Section 222(a) rules state:

Except as required by law or with the approval of the customer, a telecommunications carrier that receives or obtains customer proprietary network information by virtue of its provision of a telecommunications service shall only use, disclose, or permit access to individually identifiable customer proprietary network information in its provision of (A) the telecommunications service from which such information is derived, or (B) services necessary to, or used in, the provision of such telecommunications service, including the publishing of directories.

In plain English, this means that every telecom carrier must take steps to protect customer data that is collected as part of providing a telecommunications service.

There have been a number of well-known data breaches in the industry, and the FCC is proposing to tighten the rules related to notifying customers about data breaches. For example, the current rules give carriers seven days to notify customers of breaches of their personal data, and the NPRM will propose to drastically shorten that time frame. The FCC will also be proposing that carriers must disclose inadvertent breaches of data that were caused by the carrier, as opposed to a malicious outside party. Finally, carriers will be required to report all data breaches to the FCC, the FBI, and the U.S. Secret Service.

For those of you not familiar with the NPRM process, the FCC uses this method to notify the industry of proposed changes in regulations. An NPRM spells out the specific proposed rule changes by showing the proposed change in FCC rules. The FCC then invites comments on the proposed rule changes and often asks additional questions to get feedback. The FCC sometimes adopts the NPRM as proposed but often modifies the proposed rules based upon the comments received.

It doesn’t seem likely that the FCC will allow small carriers to opt out of these rule changes, and the new rules are likely to apply to everybody, like the current CPNI rules.

As is usual these days, there is a regulatory twist. As it sits today, the FCC no longer regulates broadband since it is not classified as a telecommunications service. The Section 222 rules only apply to telecommunications carriers, and the new rules might only apply to carriers that offer traditional telephone service, cellular services, or anything else remaining under FCC jurisdiction. An ISP that only provides broadband might be exempt from CPNI rules – although such an ISP could face an expensive legal fight if the FCC sees it otherwise. An awful lot of our regulatory rules are sitting in the gray areas these days.

However, if the FCC eventually brings broadband back into the regulatory fold, as is expected, then these rules would apply to all ISPs selling broadband services.

No Home Broadband Option

We spend a lot of time arguing policy questions, such as asking if 25/3 Mbps is adequate broadband. What policymakers should really be talking about are the huge numbers of homes with dreadful broadband. The worst thing about the deceptive FCC maps is that they often give the perception that most rural areas have at least some broadband options when many rural residents will tell you they have no real broadband options.

Policymakers don’t grasp the lousy choices in many rural areas. The FCC maps might show the availability of DSL, but even where it is available (often it’s not), the speeds can be incredibly slow. Rural households refuse to pay for DSL that might deliver only 1 or 2 Mbps download and practically no upload.

I think the FCC assumes that everybody has access to satellite broadband. But I’ve talked to countless rural residents who tried satellite broadband and rejected it. Real speeds are often much slower than advertised speeds since trees and hills can quash a satellite signal. The latency can be crippling, and in places where the speeds are impaired, the high latency means a household will struggle with simple real-time tasks like keeping a connection to a shopping site. Satellite plans also come with tiny data caps. I’d like to put a few Washington DC policymakers on a monthly data plan with a 40 GB or 60 GB cap so they can understand how quickly that is used in a month. But the real killer with satellite broadband is the cost. HughesNet told investors last year that its average revenue per customer was over $93 per month. Many rural homes refuse to pay that much for a broadband product that doesn’t work.

We hear a lot of stories about how fixed wireless technology is getting better to the point where we’re hearing preposterous conversations about bringing gigabit fixed wireless to rural areas. There are still a lot of places with woods and hills where fixed wireless is a poor technology choice. I worked with one county recently that gathered thousands of speed tests for fixed wireless that showed average download speeds under 5 Mbps and upload speeds below 1 Mbps. There are still a lot of WISPs that are cramming too many customers on towers, chaining too many towers together with wireless backhaul, and selling to customers who are too far from towers. This is not to say that there aren’t great WISPs, but in too many rural places the fixed wireless choices are bleak.

Rural residents have also suffered with cellular hotspots. These are the plans that cellular companies have had for years that basically price home broadband at the same prices and data caps as cellular broadband. I’ve heard from families who were spending $500 to $1,000 per month to enable home-schooling during the pandemic. This product is not available in huge parts of rural America because of the poor or nonexistent cellular coverage. We complain about the FCC’s broadband maps, but those are far better than the cellular company coverage maps, which massively overstate rural cellular availability.

There is some relief in sight for some rural homes. I recently talked to farmers who are thrilled with the T-Mobile fixed cellular product – but they said distance from cell sites is key and that many of their neighbors are out of range of the few cell sites found in most rural counties. There are rural folks who are happy with Starlink. But there are a lot of people now into the second year on the waiting list to get Starlink. Starlink also has reported problems with trees and hills and also comes with a steep $99 per month price tag.

When a rural household says they have no broadband connection, I’ve learned that you have to believe them. They will have already tried the DSL, fixed wireless, satellite, and cellular hotspots, and decided that none of the options work well enough to justify paying for them. The shame is that the FCC maps might give the impression that residents have two, three, or four broadband options when they really have none.

Video Continues to Drive Broadband Usage

Nielsen recently published some statistics about the way that we watch video that show a continuing trend of migration from traditional video to watching video online.

One of the most striking statistics is the total volume of online video. December 2021 saw an aggregate of 183 billion minutes of online video viewing. And even that number is likely small since there are many uses of video on the web that are not likely counted in the total. The prior largest months for video volume were 178 billion minutes in November 2021 and 160 billion minutes in March 2020, the first month of the pandemic.

Here is a comparison of the video usage by category. Cable means video delivered by a wired cable provider or from a satellite service like DirecTV. Streaming is all sources of online content like Netflix. Broadcast is watching video using an antenna. Other is an all-inclusive category that includes things like gaming, DVDs, and video on demand.

                May 2021   Dec 2021
Cable              39%        37%
Streaming          26%        28%
Broadcast          25%        26%
Other               9%         9%

Nielsen has been tracking these numbers for many years. There has been a steady migration from traditional cable viewing to both streaming and broadcast viewing as millions of homes are dropping traditional cable each year. The other category has also been growing, fueled by the explosive growth of online gaming.

Nielsen also reported on the market share of each of the major online video services. The following percentages represent the share that each service has of all online video content. It’s impressive to see that 6.4% of all video content is delivered by Netflix. It’s also impressive to see Disney+ grow so large after having just launched at the end of 2019.

Netflix 6.4%
YouTube 5.8%
Hulu 3.0%
Prime Video 2.1%
Disney+ 1.6%
All Other 8.8%

The reason I’m writing about this topic is as a reminder of how many minutes of video usage are still carried by traditional cable TV and broadcast TV. In December, there were 242 billion minutes of content delivered by traditional cable TV and another 170 billion minutes delivered by broadcast using antennas. Over time, more and more of these minutes are going to migrate online, which is one of the primary factors behind the continued explosive growth of broadband usage – networks have been seeing overall bandwidth double about every three years. The trends identified by Nielsen mean that ISPs can’t expect any break in that growth for the foreseeable future, likely well into the next decade.

Auditing RDOF Performance

Today’s blog covers an issue that gets my blood boiling every time I think about it. The FCC just announced increased testing for ISPs accepting funding from FCC High Cost programs, which includes CAF II and RDOF. The new rules include the following:

  • The number of audits and verifications will double in 2022 compared to 2021 and will include some on-site audits.
  • There will be more verification prior to the first required deployment milestone.
  • Large dollar funding recipients will be subject to an on-site audit in at least one state.
  • High-risk recipients will be subject to additional audits and verifications.
  • Audit results, speed tests, and latency performance will now be posted online.

That all sounds good until you look at the practical results of the testing program. The worst that can happen to an ISP for failing the FCC tests is to lose some small portion of any remaining funding.

Under current rules, ISPs can choose between three methods for testing. They may elect what the FCC calls the MBA program, which uses an external vendor approved by the FCC to perform the testing. ISPs can also use existing network tools built into the customer CPE that allow test pinging and other testing methodologies. Finally, an ISP can install ‘white boxes’ that provide the ability to perform the tests. What’s not easy to dig out of the rules is that ISPs have a hand in deciding who gets tested.

In the past, the number of required tests was as follows: for award areas with 50 or fewer households, the test covered 5 customers; for 51 to 500 households, 10% of households; for more than 500 households, 50 households. ISPs declaring a high latency had to test more locations, with a maximum of 370. Doubling the testing probably means doubling the number of locations that are tested.

Tests for a given customer are done for a full week each quarter. Tests must be conducted in the evenings between 6:00 PM and 12:00 AM. Latency tests must be done every minute during the six-hour testing window. Speed tests, run separately for upload and download speeds, must be done once per hour during the six-hour testing window.

ISPs are expected to meet latency standards 95% of the time. Speed tests must achieve 80% of the expected upload and download speed 80% of the time. An example of this requirement is that a carrier guaranteeing a gigabit of speed must achieve 800 Mbps 80% of the time. ISPs that meet the speeds and latencies for 100% of customers are excused from quarterly testing and only have to test once per year.
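The 80/80 speed standard can be sketched in a few lines of Python. This is a hypothetical illustration, not FCC code; the function name and sample data are my own.

```python
# Hypothetical sketch of the FCC 80/80 speed standard: a location passes
# if at least 80% of its speed-test samples reach 80% of the subscribed speed.

def passes_speed_standard(samples_mbps, subscribed_mbps, threshold=0.80):
    """Return True if at least 80% of samples hit 80% of the subscribed speed."""
    target = threshold * subscribed_mbps           # e.g., 800 Mbps for a gigabit tier
    passing = sum(1 for s in samples_mbps if s >= target)
    return passing / len(samples_mbps) >= threshold

# A gigabit subscriber: 5 of 6 hourly tests reach 800 Mbps, so the location passes.
print(passes_speed_standard([850, 900, 790, 820, 880, 910], 1000))  # True
```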

The real kicker of all of this is that the penalties for failing the tests have no teeth. The following financial penalties are applied only to the remaining subsidy payments:

  • If between 85% and 100% of households meet the test standards, the ISP loses 5% of any remaining FCC support.
  • If between 70% and 85% of households meet the test standards, the ISP loses 10% of future support.
  • If between 55% and 70% of households meet the test standards, the ISP loses 15% of future FCC support.
  • If less than 55% of households meet the test standard, the ISP loses 25% of their future support.

The penalties for an ISP that doesn’t perform on RDOF are minor. Consider a WISP that accepted $100 million of RDOF to build gigabit wireless but only delivers a few hundred Mbps speeds. The first chance for testing is in the third year of RDOF, where an ISP is required to have completed 40% of the buildout. My example WISP will fail more than 55% of speed tests and will incur the maximum FCC penalty. That means the ISP will collect $10 million per year in the first two years and $7.5 million per year in years 3 – 10. By the end of the 10-year payout, the ISP will still have collected $80 million of the original $100 million RDOF award. That is not much of a penalty for massive underperformance.
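The arithmetic behind that example is worth laying out. The sketch below assumes a level payout of one-tenth of the award per year, with the maximum 25% penalty applied from year 3 onward; the figures are illustrative.

```python
# Back-of-the-envelope math for the $100 million RDOF example:
# full payments for two years, then the maximum 25% penalty for years 3-10.

award = 100_000_000
annual = award / 10                      # $10M per year over a 10-year payout

years_1_2 = 2 * annual                   # full payments before testing begins
years_3_10 = 8 * annual * (1 - 0.25)     # 25% penalty leaves $7.5M per year

total = years_1_2 + years_3_10
print(f"${total:,.0f}")                  # $80,000,000 of the original $100M
```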

I think these weak penalties emboldened ISPs to lie about the speeds of their technologies in the RDOF auction. ISPs are still paid handsomely even if they don’t come close to meeting the promised speeds. And that’s not the entire story. There were bidding penalties for ISPs promising speeds slower than gigabit. A WISP that told the truth about speeds in the auction (and many did) likely lost in the auction if bidding directly against a WISP that exaggerated speeds.

These penalties are shameful and are another example of the FCC favoring ISPs over the public. If an ISP whiffs the test in the third year, it should stop receiving all future subsidies. If an ISP fails the tests badly enough, such as delivering 200 Mbps when promising a gigabit, it ought to be forced to return 100% of the previous RDOF awards. If those were the rules, any ISPs that lied about speed capabilities would withdraw from RDOF tomorrow.

People will ask why it’s so bad that an ISP that overstated speed capabilities won the RDOF. This cheats the people living in the RDOF award area. Residents thought they would get gigabit broadband and will get something far less. While customers might be pleased with the initial speeds in this example, the network being built is not ready to provide good broadband for the rest of the century. There is a good chance that in a decade or two we’ll be looking at these same award areas again and asking if these areas need more federal subsidy to swap out to a faster technology.

If an ISP takes big federal money and fails to perform there should be real penalties. If that was made clear upfront, then ISPs that can’t meet speed requirements would not be tempted to apply. One only has to look back at CAF II to see how ineffective this testing is. Does anybody remember any big penalties for the big telcos not upgrading DSL?

Employees Favor Working from Home

USA Today reported on the results of the fifth annual survey of the State of Remote Work conducted by Owl Labs and Global Workplace Analytics. The nationwide survey was done last summer at a time when almost one-fourth of workers continued to work at least part-time from home.

The survey showed a strong desire of employees to work from home, at least part-time. Here are a few of the most interesting findings from the survey:

  • A little more than half of all employees would choose to work full-time from home. 74% of those interviewed said that working at home made them happier.
  • Almost half of workers said they would take a 5% pay cut to continue to work remotely, at least part of the time.
  • 91% of those working at home say they are as productive or more productive than when in the office. 55% say they work more hours at home than when they are in the office.
  • Almost one-fourth of employees said they would quit their jobs if they aren’t allowed to work remotely. For context, this survey was done at a time when employees were quitting jobs at historic rates.
  • A lot of employees changed jobs during the pandemic. 90% of them were looking for a better career. 88% also wanted a better work-life balance. 87% were looking for less stress. 84% wanted more flexibility for where they work, and 82% wanted more flexibility of when they work.
  • A lot of people relocated during the pandemic, which was made easier through working from home. Two-thirds of employees who relocated were between the ages of 26 and 40. Interestingly to those reading this blog, 63% of employees who moved from urban areas to rural areas were in this age group. More than half of those that moved from suburban to rural areas also were in the younger age group.

This survey shows similar results to other surveys taken over the last few years. It seems that many people got a taste of working from home and decided that they like it more than going to the office every day. A lot of employers are starting to demand that workers return to the office, and many have been reporting a mass exodus of employees who don’t wish to come back.

This has a lot of implications for rural and suburban communities. Many people want to get away from the stress of urban life and lead a more relaxing lifestyle – but they need good broadband to do so. Remote workers don’t want so-so broadband – they want reliable broadband that means they can always connect remotely as needed. 56% of younger workers said they would love to incorporate virtual reality and virtual meetings into the workday – something that will require fast upload and download speeds.

From an economic development perspective, work-from-home employees are a huge boon to a rural community that has likely been aging and slowly shrinking over time. Employees making good salaries can provide a huge boost to a local economy. For years, rural communities have sunk big tax incentives into trying to attract new employers. It probably costs a lot less to attract one hundred remote workers than to lure a traditional employer that will bring a hundred jobs.

I have clients that operate rural fiber networks who tell me that their communities are seeing new demand for building homes and that housing prices are increasing as people want to move to the community.

This presents an interesting challenge to rural communities wondering how to get the word out to prospective work-from-home employees. This new trend is a 180-degree turn from traditional economic development efforts – but communities that master this ought to grow and thrive and breathe fresh life into aging communities.

Jitter – A Measure of Broadband Quality

Most people have heard of latency, which is a measure of the average delay of data packets on a network. There is another important measure of network quality that is rarely talked about. Jitter is the variance in the delays of signals being delivered through a broadband network connection. Jitter occurs when the latency increases or decreases over time.

We have a tendency in the industry to oversimplify technical issues. We take a speed test and assume the answer that pops out is our speed. Those same speed tests also measure latency, and even network engineers sometimes get mentally lazy and are satisfied to see an expected latency number on a network test. But in reality, the broadband signal coming into your home is incredibly erratic. From millisecond to millisecond, the amount of data hitting your home network varies widely. Measuring jitter means measuring the degree of network chaos.
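One simple way to quantify that chaos is to look at how much consecutive latency measurements differ from one another. The short Python sketch below uses the mean of those differences as an informal jitter metric; the function name and sample numbers are my own illustrations, not a formal standard.

```python
# An informal jitter metric: the average absolute difference between
# consecutive latency samples. Two connections with the same average
# latency can have very different jitter.

def mean_jitter_ms(latencies_ms):
    """Average absolute difference between consecutive latency samples."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

steady = [20, 21, 20, 19, 20, 21]    # average ~20 ms, very consistent
erratic = [5, 35, 10, 30, 8, 32]     # average ~20 ms, wildly inconsistent
print(mean_jitter_ms(steady))        # 1.0
print(mean_jitter_ms(erratic))       # 24.2
```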

Jitter increases when networks get overwhelmed, even temporarily. Delays are caused in any network when the amount of data being delivered exceeds what can be accepted. There are a few common causes of increased jitter:

  • Not Enough Bandwidth. Low bandwidth connections experience increased jitter when incoming packets exceed the capacity of the broadband connection. This effect can cascade and multiply when the network is overwhelmed – being overly busy increases jitter, and the worse jitter then makes it even harder to receive incoming packets.
  • Hardware Limitations. Networks can bog down when outdated routers, switches, or modems can’t fully handle the volume of packets. Even issues like old or faulty cabling can cause delays and increase jitter.
  • Network Handoffs. Bottlenecks are the most vulnerable points in a network. The most common bottleneck in our homes is the device that converts landline broadband into WiFi. Even a slight hiccup at a bottleneck will negatively impact performance in the entire network.

All of these factors help to explain why old technology like DSL performs even worse than might be expected. Consider a home that has a 15 Mbps download connection on DSL. If an ISP were to instead deliver a 15 Mbps connection on fiber, the same customer would see a significant improvement. A fiber connection would avoid the jitter issues caused by antiquated DSL hardware. We tend to focus on speeds, but a 100 Mbps connection on a fiber network will typically have a lot less jitter than a 100 Mbps connection on a cable company network. Customers who try a fiber connection for the first time commonly say that the network ‘feels’ faster – what they are noticing is the reduced jitter.

Jitter can be deadliest to real-time connections – most people aren’t concerned about jitter if it means it takes a little longer to download a file. But increased jitter can play havoc with an important Zoom call or with maintaining a TV signal during a big sports event. It’s easiest to notice jitter when a real-time function hesitates or fails. Your home might have plenty of download bandwidth, and yet broadband connections still fail because small problems caused by jitter can accumulate to make the connection fail.

ISPs have techniques that can help to control jitter. One of the more interesting ones is to use a jitter buffer that grabs and holds data packets that arrive too quickly. It may not feel intuitive that slowing a network can improve quality. But recall that jitter is caused when there is a time delay between different packets in the same transmission. There is no way to make the slowest packets arrive any sooner – so slowing down the fastest ones increases the chance that Zoom call packets can be delivered evenly.
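The jitter buffer idea can be sketched with a toy example. The code below shows only the reordering half of the job – holding packets and releasing them in sequence – and omits the timed, evenly paced playout a real buffer performs. The class name, buffer depth, and sample packets are all hypothetical.

```python
# Toy sketch of a jitter buffer: packets that arrive out of order are
# held briefly and released in sequence, smoothing out arrival chaos.

import heapq

class JitterBuffer:
    def __init__(self, depth_ms=100):
        self.depth_ms = depth_ms      # how long a real buffer would hold early packets
        self.heap = []                # min-heap keyed on sequence number

    def receive(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def playout(self):
        """Release buffered packets in sequence order."""
        while self.heap:
            yield heapq.heappop(self.heap)

buf = JitterBuffer()
for seq, data in [(2, "b"), (1, "a"), (3, "c")]:   # packets arrive out of order
    buf.receive(seq, data)
print([p for _, p in buf.playout()])               # ['a', 'b', 'c']
```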

Fully understanding the causes of jitter in any specific network is a challenge because the causes can be subtle. It’s often hard to pinpoint a jitter problem because it can be here one millisecond and gone the next. But it’s something we should be discussing more. A lot of the complaints people have about their broadband connection are caused by too-high jitter.

FCC Proposes Broadband Labels

The FCC recently issued a Notice of Proposed Rulemaking (NPRM) in CG Docket No. 22-2 that asks questions related to the requirement that ISPs provide broadband labels. This new requirement was created by the Infrastructure Investment and Jobs Act. The Act requires that ISPs must display broadband consumer labels and disclose information to consumers about broadband service plans.

The FCC has taken the directive from Congress and turned it into a proposed set of regulations. ISPs and consumers will have a short 30-day window to file comments on the FCC’s suggested rules. Note that for most NPRMs, the final rules largely follow the proposed rules unless the industry makes a compelling case in comments that sways the FCC’s thinking.

Following are a few key aspects of the proposed new regulations:

  • This would seemingly apply to all ISPs – those that deliver broadband through wired or wireless connections. I assume that wireless includes satellite ISPs.
  • The disclosures of broadband information to customers would be summarized in a broadband label similar to ones used for food products. The FCC contemplated labels in 2016 as part of the order that required net neutrality, but when net neutrality was killed, the label requirement was also killed. The FCC is proposing using the same labels that were created in 2016.
  • The labels include a detailed description of the broadband product. ISPs must report broadband speeds. ISPs must report prices, and if a customer has a promotional price they must disclose when that price will expire and the regular price that will be charged. ISPs have to include information on data caps or any other restrictions on broadband usage. ISPs also must disclose network management practices and disclose latency and jitter.

The NPRM asks how the FCC will be able to judge the accuracy of labels. To me, this is the key question in the docket, and there is no easy answer. Many ISPs are not going to want to tell the truth about their products since the labels allow for easy side-by-side comparison between competitors. Probably an even more important question is how the FCC can enforce accuracy and how the agency might discipline ISPs that won’t disclose accurate labels. Food manufacturers that lie on the label can be forced from the market, but it’s not that easy to discipline ISPs, except perhaps with monetary damages. Remember that for now, this FCC has little direct authority over ISPs. The FCC will be in charge of these labels, but the agency doesn’t regulate much else related to broadband.

If done accurately, the labels should allow a consumer to quickly identify the difference between two ISPs. This has to scare any ISP competing against a fiber network. The fiber network will show symmetrical speeds, lower latency, and likely pricing with no gotchas. I have to think that cable companies, DSL providers, and some WISPs do not want these labels.

There are a lot of practical concerns about the labels. How does an ISP report on a technology that doesn’t deliver the same broadband speeds and quality everywhere in a market? DSL speeds vary from customer to customer, and even a next-door neighbor can have a drastically different DSL experience. Will a big telco that is still reporting rural DSL speeds of 25/3 Mbps come clean with customers on the actual expected speed?

Most people probably don’t realize it, but the speeds delivered by many cable companies can also differ significantly from neighborhood to neighborhood. In most cities, we find some neighborhoods where a cable company has a clean network that nails the desired speeds and latency. But just blocks away might be a neighborhood with network problems that the cable company has not addressed where speeds are far lower than expected. Are cable companies going to reveal this to customers in the neighborhoods with poor performance? I’ll be shocked if they do.

Fixed wireless broadband also can deliver a wide range of customer experiences. A customer close to the tower with a perfect line-of-sight will get the best speeds, while a host of issues like distance and impediments in the environment can slow down speeds for other customers. Will a WISP that is serving customers past the recommended range of the radios really come clean with a potential customer about how lousy the broadband will be?

If the FCC implements this rule without some way to police it, the big ISPs will continue to tell customers the story they want them to hear instead of the truth. But it sounds impossible for the FCC to monitor the whole country to see if ISPs are reporting the truth. The FCC might consider some sort of customer feedback process like the one it is planning for the FCC maps. But I can’t see the FCC getting bogged down in dealing with hundreds of thousands or even millions of complaints about the labels.

We also can’t forget that the ISP is not always the reason for a poor broadband experience. There are millions of customers who insist on using the cheap WiFi router they bought a decade ago that is killing the broadband speeds inside the house. The FCC is never going to get involved deeply enough to know if an ISP or the customer is at fault.

The labels sound like a good idea, and if ISPs are even only partially truthful, the labels will highlight the difference between competitors. But I can’t imagine any set of requirements that will suddenly force a bad actor ISP to tell the unvarnished truth.

The Challenge of State Broadband Plans

One of the most interesting aspects of the upcoming BEAD (Broadband Equity, Access, and Deployment) grants is that the money is going to flow through the states. In many of the states I’ve been following, it looks like the money will be distributed by passing the money through existing state broadband grant programs.

States with existing broadband grant programs are going to find that some aspects of the BEAD grants will differ from the current state grant rules. A good example is in the definition of unserved and underserved – the BEAD definition is going to be different than the definitions used in almost every existing state grant program. Since the federal legislation that created the BEAD grants rules is so specific, there will be numerous ways that the BEAD grant will differ from a state grant program.

The obvious solution is for states to adopt the federal rules. There is certainly a huge amount of incentive for states to make this work since every state will get at least $100 million while most will get much more. The $42.5 billion in funding averages out to $850 million per state.

Unfortunately, it’s not automatic that states can or will accept the federal grant rules since the grant rules in many states were prescribed by the state legislatures. In such cases, the legislatures will have to take steps to modify the state grant rules. Anybody who has ever worked with a legislature on broadband issues knows that this won’t always be a cakewalk. The specific rules baked into existing state broadband grants are often the results of difficult wrangling and negotiations in the legislatures.

The politics of broadband varies widely from state to state. There are states where the broadband grant rules are heavily influenced by the large incumbents, while other state grant rules are more consumer-oriented. I imagine there are state legislators who are going to bristle at the idea of having to swallow grant rules handed down by the federal government.

Add to this the fact that states have a time clock running. The NTIA will publish final rules for the states to follow by May 16, and then states will have a window to file state plans. That timeline means that legislatures need to agree to changes in grant programs before a state can file a BEAD broadband plan. For some legislatures, this will feel like acting at the speed of light. I’m positive that there will be states where accepting the revised grant rules will be heavily debated.

Perhaps a bigger challenge comes from the states without a formal broadband grant office. These states must first decide who in the state is going to tackle writing the state plan to obtain the grant funding, and even then, it’s likely that many of these states will need the legislature to act. For a state without any staff dedicated to the broadband issue, the process of getting a state plan filed with the NTIA sounds like a serious administrative challenge.

The process doesn’t end when a state files a broadband plan with the NTIA, since the NTIA will have to approve each plan. It’s going to get interesting if the NTIA disagrees with the provisions of a state plan. It’s not inconceivable that a state legislature might have to get involved a second time if a filed state plan has to be modified.

It’s not unusual for the states and the federal government to wrangle over the details of federal spending programs. There are numerous examples of states turning down huge amounts of federal money when states didn’t like the rules attached to the funding.

Because of the prominence of the rural broadband issue in most states, it’s hard to think states won’t do whatever is needed to get the BEAD grant funding. But the state legislative process is not always logical, and there will likely be a few states where legislative intransigence could put the grant money at risk.

I strongly urge ISPs and local governments in every state to take the time to find out what your state is doing. Many states are currently inviting comments and involvement in the creation of the state broadband plan for these grants. This is the time to make sure your state is doing this the right way.