Grants for Low-Income Apartments

There is one section of the $42.5 billion Broadband Equity, Access, and Deployment (BEAD) grants that cities should find interesting. These grants can be used for installing internet and Wi-Fi infrastructure or providing reduced-cost broadband within a multi-family residential building, with priority given to a residential building that has a substantial share of unserved households or is in a location in which the percentage of individuals with a household income at or below 150 percent of the poverty line applicable to a family of the size involved (as determined under section 673(2) of the Community Services Block Grant Act (42 U.S.C. 9902(2))) is higher than the national percentage of such individuals.

The BEAD grants are mostly aimed at solving the rural digital divide, but this is an open invitation for cities to seek grant funding to bring better broadband to low-income apartment complexes.

As with most new laws, this one contains an interesting incongruity. The BEAD grants establish a priority for States to follow – States should first use BEAD grants to bring broadband to unserved locations with speeds under 25/3 Mbps, then to underserved locations with speeds under 100/20 Mbps, and finally to anchor institutions. My reading of the language is that serving low-income housing shares top priority with rural unserved locations – the language says that grants can be used for unserved apartment buildings OR for low-income apartment buildings. That seemingly gives low-income apartment buildings a higher priority than underserved locations. It also implies that there is no speed requirement for low-income apartments to qualify for grant funding – the only requirement is the level of poverty.
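The priority order described above can be sketched as a simple classifier. The speed thresholds come from the statute; treating low-income apartment buildings as sharing top priority is this blog's reading of the language, not settled NTIA guidance:

```python
def bead_priority(down_mbps, up_mbps, low_income_mdu=False):
    """Classify a location under the BEAD funding order discussed above.

    Thresholds are statutory: unserved is below 25/3 Mbps, underserved
    is below 100/20 Mbps. Treating a low-income apartment building as
    top priority reflects this blog's reading of the language.
    """
    if low_income_mdu:
        return "priority 1 (low-income apartment building)"
    if down_mbps < 25 or up_mbps < 3:
        return "priority 1 (unserved)"
    if down_mbps < 100 or up_mbps < 20:
        return "priority 2 (underserved)"
    return "served (not grant-eligible)"
```

Note the consequence mentioned above: a low-income building is never checked against the speed thresholds at all.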

It’s going to be interesting to see how States interpret this. States with big cities could see huge demand for broadband grants from cities that see this as the chance to solve the urban digital divide. I know that $42.5 billion is a lot of money, but it’s not going to stretch as far as Congress might have believed if every major city sees this as a chance to bring fiber to low-income neighborhoods.

The language is interesting in that it allows for bringing either Wi-Fi or reduced-cost broadband. The term Wi-Fi suggests what I call centralized Wi-Fi that floods hallways and common areas in apartment buildings. It’s a nice thing to have, but it is not the future-looking broadband that is needed for the next twenty years. I’d hate to see a lot of grants asking to install Wi-Fi instead of bringing real broadband to apartment units.

Bringing broadband to apartments will require an ISP. That could be almost anybody under the BEAD grants. Cities could be the ISP in a state that allows municipal ISPs. Cities could partner with the large incumbent ISPs or with smaller commercial ISPs. The most interesting idea is to partner with a non-profit ISP. It would even be possible for cities to hand these networks off to an urban cooperative. Anybody interested in the last two possibilities needs to be moving quickly to have the non-profit or cooperative formed by the time the grant requests are filed in a year.

A year is not a lot of time for cities to capitalize on this possibility. The specific apartments to be served should be identified. Somebody has to design and price out a technical solution. A city will have a better chance of winning funding if it has identified the ISP partner. And cities need to get active over the next few months to make sure that States build this option into the broadband plan that must be approved by the NTIA.

This $42.5 billion grant program is extraordinary in its size and scope – and it’s a once-in-a-lifetime chance to solve persistent broadband gaps. Cities need to marshal their resources quickly to make this happen because there probably won’t be another funding program for a long time aimed at solving the urban digital divide.

Improving Agriculture Tech

The typical farmer must make critical decisions each year on the crops to plant, when to plant them, how to best fertilize, how and when to water and weed, when to harvest, and how to best sell finished crops. Failing at even one of these decision points can ruin a crop year. Farmers also increasingly want to use farming practices that strengthen the soil rather than deplete it. Modern farming is a complex business, and a farmer has to annually interface with seed companies, equipment makers, chemical companies, crop distributors, temporary laborers, banks, insurance companies, and the government.

There has been a lot of effort made over the last decade to develop technology solutions to help farmers with some of the major decisions. For example, there are now ways to use aerial photography to diagnose the conditions of each section of a field. We’re in the early stages of developing sensors that will report on everything from moisture content to nutrient levels. Farmers can now buy a dizzying array of smart tractors and other smart farming equipment.

Unfortunately, the new hardware and software solutions have brought a new dilemma to the typical farmer – how to use these new tools for the specific local conditions at a given farm. Farmers suddenly find themselves juggling a dozen different pieces of software that don’t work together. There is no easy way to transfer data between different software systems. Software that might help a corn farmer likely won’t work as well for a farmer growing sweet potatoes or tomatoes. Farmers are complaining that they need to hire a systems analyst just to make sense of all of the new tools.

The Linux Foundation has begun a new open source software project to try to integrate the many software challenges suddenly confronting farmers. Labeled as the AgStack Foundation, the new effort will solicit input from across the industry, from farmers, equipment manufacturers, academics and researchers, and the government. The stated goal of the foundation is “to improve global agriculture efficiency through the creation, maintenance, and enhancement of free, re-usable, open and specialized digital infrastructure for data applications”.

The Linux Foundation is a non-profit consortium that has tackled other complex challenges like software for self-driving cars, wireless networks, and security systems. The Linux Foundation's approach to software development is to create an open core platform of middleware that is made available to everybody. The goal is a platform that can take inputs from multiple ag software packages and integrate the data exchange between products in a way that feels seamless to farmers. For example, data gathered from a smart tractor would be made easily available to software used for other purposes.
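AgStack has not published any data schemas yet, so as a purely hypothetical sketch of the middleware idea, here is what a shared record and a vendor adapter might look like – every field name and format below is invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical shared record – AgStack has published no schema, so the
# field names and units here are invented for illustration only.
@dataclass
class FieldReading:
    field_id: str
    source: str      # e.g. "smart-tractor" or "soil-sensor"
    metric: str      # e.g. "soil_moisture_pct"
    value: float
    timestamp: str   # ISO 8601

def from_tractor_record(raw):
    """Adapter: maps one vendor's (made-up) export format onto the
    shared record so other ag software can consume the data."""
    return FieldReading(
        field_id=raw["paddock"],
        source="smart-tractor",
        metric="soil_moisture_pct",
        value=float(raw["moist"]),
        timestamp=raw["ts"],
    )
```

The point of the middleware is that every vendor writes one small adapter like this instead of every pair of products writing a custom integration.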

This integrated software approach makes it even more vital that farms get good broadband. The AgStack software is going to have to operate in the cloud, and a farm needs a robust broadband connection to load data to and from the cloud. Even when farms have adequate download broadband speeds, many have terrible upload connections.

The early members and contributors to the effort include Hewlett Packard Enterprise (HPE), the Purdue University OATS & Agricultural Informatics Lab, the University of California Agriculture and Natural Resources (UC-ANR), and FarmOS. As the initiative gets legs and starts producing results, it seems likely that almost every maker of ag software will want to integrate with this new platform. Joining this effort will enhance the value of every software package.

Agriculture is our largest industry, but because farms come in many sizes and produce a huge variety of crops and livestock, it is an enormous challenge to develop standard industry software that is useful to more than a small percentage of the market. This effort can hopefully bring the entire software ecosystem together while operating behind the scenes, invisible to the farmers who will benefit the most.

Cord Cutting Stays on Pace 3Q 2021

The largest traditional cable providers collectively lost over 1.3 million customers in the third quarter of 2021 – an overall loss of 1.9% of customers.

The numbers below come from Leichtman Research Group which compiles these numbers from reports made to investors, except for Cox which is estimated. The numbers reported are for the largest cable providers, and Leichtman estimates that these companies represent 95% of all cable customers in the country.

Following is a comparison of third-quarter subscriber numbers against the second quarter of this year:

                       3Q 2021       Change   % Change
Comcast             18,549,000    (407,000)      -2.1%
Charter             15,891,000    (121,000)      -0.8%
AT&T                15,000,000    (412,000)      -2.7%
Dish TV              8,424,000    (130,000)      -1.5%
Verizon              3,714,000     (68,000)      -1.8%
Cox                  3,460,000     (70,000)      -2.0%
Altice               2,803,000     (67,500)      -2.4%
Mediacom               590,000     (21,000)      -3.4%
Frontier               400,000     (23,000)      -5.4%
Atlantic Broadband     360,000      (6,000)      -1.6%
Cable One              279,000      (8,000)      -2.8%
Total               69,470,000  (1,333,500)      -1.9%
Total Cable         41,932,000    (700,500)      -1.6%
Total Other         27,538,000    (633,000)      -2.2%
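The totals in the table can be cross-checked with a few lines of arithmetic (all figures are copied from the Leichtman numbers above):

```python
# Subscriber counts and quarterly changes copied from the table above.
subs = {
    "Comcast": (18_549_000, -407_000),
    "Charter": (15_891_000, -121_000),
    "AT&T": (15_000_000, -412_000),
    "Dish TV": (8_424_000, -130_000),
    "Verizon": (3_714_000, -68_000),
    "Cox": (3_460_000, -70_000),
    "Altice": (2_803_000, -67_500),
    "Mediacom": (590_000, -21_000),
    "Frontier": (400_000, -23_000),
    "Atlantic Broadband": (360_000, -6_000),
    "Cable One": (279_000, -8_000),
}

total = sum(s for s, _ in subs.values())    # 69,470,000
change = sum(c for _, c in subs.values())   # -1,333,500

# The percentage change is measured against the prior quarter's base,
# i.e., the current total minus the (negative) change.
pct = 100 * change / (total - change)       # about -1.9%
```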

Some observations about the numbers:

  • The big loser continued to be AT&T, which lost a net of 412,000 traditional video customers between DirecTV and AT&T TV. While AT&T spun off this business, it still retains a controlling share.
  • The big percentage loser continues to be Frontier which lost 5.4% of its cable customers in the quarter.
  • Comcast lost almost as many cable customers as AT&T at 407,000.
  • Charter continues to lose cable customers at a slower pace than the rest of the industry and is the only large company that lost less than 1% of its cable customers in the quarter.
  • This is the tenth consecutive quarter that the industry lost over one million cable subscribers.
  • These companies have collectively lost 4,250,000 customers this calendar year.

To put these losses into perspective, this drops the nationwide penetration rate for traditional cable to about 56%. These same large companies had 85.4 million cable customers at the end of 2018 and 79.5 million by the end of 2019. That's a loss of almost 16 million customers since the end of 2018.

The big losses in cable subscribers are happening at the same time that the biggest ISPs are adding a lot of broadband customers – the biggest ISPs added 630,000 new broadband subscribers in the third quarter of 2021.

Many customers leaving traditional cable are migrating to online programming alternatives. For the quarter, Hulu + Live TV, Sling TV, and FuboTV collectively gained 680,000 customers.

Voice over New Radio

I cut my teeth in this industry working with and for telephone companies. But telephone service is now considered by most in the industry to be a commodity barely worth any consideration – it’s just something that’s easy and works. Except when it doesn’t. Cellular carriers have run into problems maintaining voice calls when customers roam between the new 5G frequency bands and the older 4G frequencies.

Each of the cellular carriers has launched new frequency bands in the last few years and labeled them as 5G. The new bands are not really 5G yet because the carriers haven't implemented any 5G features. But the carriers have deployed the new frequencies so they are ready for full 5G when it finally arrives. The new frequencies are operated as separate networks and are not fully integrated into the traditional cellular network – in effect, cellular companies are now operating two side-by-side networks. They will eventually launch true 5G on the new frequencies and over time will integrate the 4G networks with the new 5G networks. It's a smart migration plan.

The cellular carriers are seeing dropped voice calls when a customer roams and a voice connection is handed off between the two networks. Traditionally, roaming happened when a customer moved from one cell site to a neighboring one. Roaming has gotten more complicated because customers can now be handed between networks while still using the same cell site. The coverage areas of the old and new frequencies are not the same, and customers roam when moving out of range of a given frequency or when hitting a dead spot. The most acute mismatch is between 4G coverage and the small areas covered by millimeter-wave spectrum in some city centers.

It turns out that a lot of telephone calls are dropped during the transition between the two networks. There has always been some small percentage of calls that get dropped while roaming, and we’ve each experienced times when we unexpectedly lost a voice call – but the issue is far more pronounced when roaming between the 5G and 4G networks.

The solution that has been created to fix the voice problems is labeled Voice over New Radio (VoNR). The technology brings an old concept to 5G networks. ISPs like cable companies and WISPs process IP voice calls through an IP Multimedia Core Network Subsystem (IMS). The IMS core uses standard protocols like SIP (Session Initiation Protocol) to standardize handoffs so that calls can be exchanged between disparate kinds of networks.

VoNR packetizes the media layer along with the voice signal. This embedded system means that a call that is transferred to 4G can quickly establish a connection with voice over LTE before the call gets dropped. This sounds like a simple concept, but on a pure IP network, it’s not easy to distinguish voice call packets from other data packets. That alone causes some of the problems on 5G because a voice call doesn’t get priority over other data packets. If a 5G signal weakens for any reason, a voice call suffers and can drop like any other broadband function. We barely notice when there is a hiccup when web browsing or watching a video, but even a quick temporary hiccup can end a voice call.
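The problem of distinguishing voice packets from other data packets has a standard partial remedy that illustrates the point. As a minimal sketch (not how any particular carrier implements VoNR), a VoIP application on Linux can mark its media socket with DSCP Expedited Forwarding, the marking conventionally used for voice traffic:

```python
import socket

# DSCP 46 (Expedited Forwarding) occupies the top six bits of the IP
# TOS byte, so the value passed to IP_TOS is 46 << 2 = 0xB8.
DSCP_EF_TOS = 46 << 2  # 0xB8

# A UDP socket such as one that would carry RTP voice media.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)

# Whether routers along the path honor the marking is entirely up to
# the network operator – marking alone guarantees nothing.
```

On a network that honors DSCP, these packets get queued ahead of bulk data; on one that ignores it, a voice call competes with every other packet, which is roughly the situation described above.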

The new technology promises some interesting new functions for 5G. For example, it should be possible in the future to prioritize calls made to 911 so that they can't be dropped. VoNR will also allow for improved voice quality and new features – with 5G, there is enough bandwidth to create a conference call between multiple parties without losing call quality, and to establish guaranteed voice and music connections while gaming or doing other data-intensive tasks.

As an old telco guy, it’s a little nostalgic to see engineers working to improve the quality of voice. Over the last decades, we’ve learned to tolerate low-quality cellular voice connections, and we’ve mostly forgotten how good the connections used to be on our old black Bell rotary-dial phones. This isn’t one of the touted benefits of 5G, but perhaps Voice over New Radio can bring that back again.

Why I am Thankful for 2021

Every year I write a blog at Thanksgiving talking about the things in our industry for which I am thankful. Most years, this is easy because there are always a lot of great things happening in the broadband industry. But 2021 has been hard for a lot of the folks in the broadband industry. I was deeply touched this year by many of the stories I heard during the pandemic. I heard from a rural high school principal who was upset because 9% of the students in his school disappeared when learning went virtual. I talked to a librarian who was distraught watching students sit outside her library in the snow all day to keep up with virtual schoolwork.

Please feel free to comment at the end of this blog about events this year in the broadband industry for which you are thankful.

ARPA Grants Provide Local Solutions. The ARPA grants given directly to local governments are a breath of fresh air in an industry where only carriers have been able to get government funding for broadband projects. Towns, cities, and counties can use this money to solve the most pressing broadband problems in their community – rural residents with no broadband options, low-income neighborhoods that have been left behind, or retail shopping districts that ISPs have ignored. It’s too bad that it took a pandemic to try the idea of empowering local communities to tackle the problems that are specific to their communities.

Finally, an Emphasis on the Digital Divide. The recently enacted Infrastructure Investment and Jobs Act (IIJA) created two new grant programs to address digital equity and inclusion. Together these provide $2.75 billion in grants to tackle the digital divide. We’ve been talking about the digital divide for at least fifteen years, and this is the first significant effort to try to include everybody in the digital economy. It’s hard to imagine being digitally illiterate in today’s economy since so much of daily life is now online.

Technologies Continue to Improve. As I look around the industry, I can see that every broadband technology is getting better. There are scientists and engineers continuing to improve the performance and speeds of the technology that fuels our broadband. We’re seeing a new generation of fiber PON, better fixed wireless radios, a new generation of DOCSIS, and better cellular technology. These will all fuel better broadband.

Improved Cellular Speeds. This might seem like an odd thing to be thankful for. But much of the country saw big leaps in cellular data speeds in 2021 as the cellular carriers launched the new spectrum bands being labeled as 5G. The immediate impact of the upgrades is a giant leap in bandwidth, which makes life easier for a lot of homes without a landline broadband connection. More importantly, the major cellular carriers are all launching unlimited-usage broadband plans using the new cellular spectrum. The new spectrum will enable functional broadband in rural homes close enough to a cell tower. In cities, the faster cellular offers an affordable broadband alternative to those who can’t afford cable company broadband.

Infrastructure Funding? I was a little hesitant to put this on the list. I think we're going to have to wait a decade to find out if throwing the huge sum of $42.5 billion at rural broadband is really going to work. I have no doubt that this will make broadband better for huge parts of the country. But I also worry that much of the money will go to projects that fail or to giant ISPs that will not be good stewards of the funding in rural areas. I'm also doubtful that the FCC can improve the mapping quickly enough so that communities are not left behind by this funding. Since this is once-in-a-generation funding, I'm happy for the millions of folks who will get better broadband but worried about those who won't.

It’s a Good Year to be a Consultant. I’m not going to kid you, it’s a nice year to be a consultant and to be in high demand. But it’s also a frustrating year. I’ve had to say no to many projects where I know I could have provided the solution clients are seeking. There have been recent weeks when a dozen potential projects came to my attention – and it’s frustrating to have to say no.

My Thoughts on the BEAD Grants

I’ve had some time to think about the $42.5 billion BEAD grants that will infuse a huge amount of money into building broadband networks. I summarized the most important rules in an earlier blog, and today I follow up with some observations and predictions about how these grants will probably work.

Not the Same Everywhere. These grants will be awarded through the states. The NTIA will set the overall guidelines, but it’s inevitable that states will have a huge say in who wins the grants. If a state is determined to give these grants to giant ISPs, that state will be able to maneuver within the rules to do so – as will states that don’t want to fund big ISPs. States will definitely put their own stamp on who gets the funding.

Mostly for Fiber. WISPA and other trade associations lobbied hard to set the speed requirement for new grant-funded technology at 100/20 Mbps. This makes fixed wireless and cable company HFC networks eligible for grant funding. But it might prove a hollow victory: I believe most states are going to give a huge preference to building fiber and will be hesitant to award funding to any other technology. Undoubtedly, some states will fund other technologies, but my prediction is that most states will give most of the money to fiber projects.

Defining Served / Unserved Areas Will be a Mess. The grants attempt to improve broadband in areas with existing speeds under 25/3 Mbps. This insistence on measuring speeds will create a huge mess. Communities know that rural speeds are slower than this, but if the broadband maps remain wrong, they will have to somehow prove it. It would have been so much simpler for the grants to be eligible to overbuild DSL with no speed test. I'm sure these requirements came from lobbying by the big telcos, and we also don't seem able to break away from the dreadful FCC broadband map databases.

A smart state might base grant awards on state-generated broadband maps, but even that is going to be controversial since incumbent telcos will have a chance to challenge any grant request. Huge parts of the country have been wrongfully locked out of federal grants in the past due to the FCC database, and this is the one big chance to put that behind us. Unfortunately, there will still be communities left behind by these grants.

Many States are Not Ready for This Funding. A lot of the states only recently started to form state broadband offices, and the size of these grants and the sheer volume of paperwork will overwhelm the people who award grants. There is also a disturbing trend right now of the existing employees of broadband offices bailing to take jobs in the industry. Handling these grants properly is going to require grant reviewers with a lot of expertise to wade through the many grant requests. In this over-busy industry, I don’t know where states will find the experienced people needed to do this right.

Overlapping Grant Requests. The dollar amount of the grant pool is so huge that the states are going to get multiple grant requests that ask to serve the same areas. I’m predicting states will face an almost unsolvable puzzle trying to figure out who to fund in these situations. Just to give an example, I live in North Carolina, and I won’t be surprised if Charter files a grant request to serve most of the state. In doing so, Charter will conflict with most other grant requests – many of which will also overlap with each other.

Big ISPs Want to Be Major Players. Many big ISPs have been recently signaling that they will be seeking huge funding from these grants. AT&T alone said it hopes to use these grants to pass five million new homes. Big ISPs have some major advantages in the grant process. They will have no problem guaranteeing matching funds. They will likely ask for grants that cover large areas, which is going to be tempting for grant offices trying to award the funds. The push by big ISPs creates a dilemma for states since citizens clearly prefer local ISPs run by local people over the corporate indifference of giant ISPs.

Mediacom and West Des Moines

In 2020, the City of West Des Moines, Iowa announced it was building a fiber conduit network to pass all 36,000 residences and businesses in the city. It is a unique business model that can best be described as open-access conduit: conduit will be built along streets and into yards and parking lots to reach every home and business. The City is spending the money up front to cross the last hundred feet.

The City’s announcement also said that the conduit network is open access and is available to all ISPs. Google Fiber was announced as the first ISP tenant and agreed to serve everybody in the city. This means that Google Fiber will have to pay to pull fiber through the conduit system to reach customers.

Mediacom, the incumbent cable company in the city, sued West Des Moines and argued that the City had issued municipal bonds for the benefit of Google Fiber. The suit also alleges that the City secretly negotiated a deal with Google Fiber to the detriment of other ISPs. The suit claims Google Fiber had an advantage since one of the City Commissioners was also the primary Google Fiber lobbyist in the state.

As usual with such suits, outsiders have no idea of the facts, and I'm not taking sides with either party. A recent article said the two sides are nearing a settlement, and if so, we might never learn the facts. I find the lawsuit interesting because it raises several noteworthy issues.

A lot of cities are considering open-access networks. Politicians and the public like the idea of having a choice between multiple ISPs. But this suit raises an interesting dilemma that cities face. If a city launches an open-access network with only one ISP, like in this case, that ISP gets a huge marketing advantage over any later ISPs. On an open-access network, no ISP has a technological advantage – every ISP that might come to West Des Moines will be providing fiber broadband.

If Google Fiber is first to market, it has an opportunity to sign everybody in the city who prefers fiber broadband over cable broadband. In the case of West Des Moines, each future ISP would also have to pay to pull fiber through the network, and a second ISP might have a hard time justifying this investment if Google Fiber already has a large market share.

From my understanding of the West Des Moines business model, the City needs additional ISPs to recover the cost of building the network – the City clearly intends to bring the benefits of open access to its citizens. It's hard to believe the City would have intentionally given an unfair advantage to Google Fiber. But did it inadvertently do so by giving Google Fiber the chance to gain a lock-down market share by being first?

Another interesting question this suit raises is whether Mediacom considered moving onto the fiber network. When somebody overbuilds a market with fiber, the cable company must be prepared to compete against a fiber ISP. But in West Des Moines and a few other open-access networks like Springfield, Missouri, the cable company has a unique option – it could also jump onto the fiber network.

It would be interesting to know if Mediacom ever considered moving to fiber. The company already has most of the customers in the market, and one would think it could maintain a decent market share if it went toe-to-toe with Google Fiber or another ISP by also competing on fiber. It would be a huge decision for a cable company to make this leap because it would be an admission that fiber is better than coaxial networks – and that switch probably wouldn't play well in other Mediacom markets. I also think cable companies share a characteristic with the big telcos – it's probably challenging for a cable company to swap to a different technology in only a single market. Every back-office and operational system of the cable company is geared toward coaxial networks, and it might be too hard for a cable company to make this kind of transition. I'm always reminded that when Verizon decided to launch its FiOS business on fiber, the company concluded that the only way to do it was to start a whole new division that didn't share resources with the copper business.

Finally, one issue this suit raises for me is to wonder what motivates ISPs to join an open-access network in today's market. I understand why small ISPs might do this – they get access to many customers without making a huge capital investment. But there is a flip side: there can be a huge financial penalty for an ISP that pursues open access rather than building a network. In the last few years, we've seen a huge leap in the valuation multiples applied to facility-based fiber ISPs. When it comes time for an ISP to sell a market, or even to leverage an existing market for borrowing money, a customer on a fiber network owned by the ISP might easily be worth ten times more than that same customer on a network owned by somebody else.

That is such a stark difference in value that it makes me wonder why any big ISP would join an open-access network. Open-access is an interesting financial model for an ISP because it can start generating positive cashflow with only a few customers. But is the lure of easy cash flow a good enough enticement for an ISP to forego the future terminal value created by owning the network? This obviously works for some ISPs like Google Fiber, which seems to only want to operate on networks owned by others. But consider a small rural telco that might be located outside of West Des Moines. The telco could generate a much higher value by building to a few thousand customers in a market outside West Des Moines than by adding a few thousand customers on the open-access network.

The giant difference in terminal value might explain why open-access networks have such a hard time luring ISPs. It probably also answers the question of why a cable company like Mediacom is not jumping to join somebody else's network. It's an interesting financial debate that I'm sure many ISPs have had – is it better to go for the quick and easy cash flow from open access or to take more risk and hope for the much bigger valuation that comes from owning the network and the customers?

The Fight Over 12 GHz Spectrum

For an agency that has tried to wash its hands of regulating broadband, the FCC finds itself again trying to decide an issue that is all about broadband. There is a heavyweight battle going on at the FCC over how to use 12 GHz spectrum, and while this may seem like a spectrum issue, it's all about broadband.

12 GHz spectrum is key to several broadband technologies. First, this is the spectrum that is best suited for transmitting data between the earth and satellite constellations. The only way Starlink is going to be able to grow to serve millions of remote customers in the U.S. is by having enough backhaul to fuel the huge amounts of data that will be passed to serve that many customers. Lack of backhaul bandwidth will significantly limit the total number of customers that can be served and is an obvious major concern of the satellite companies.

It turns out that 12 GHz is also the best spectrum for transmitting large amounts of data with 5G. The carriers have been dabbling with the higher millimeter-wave spectrum, but it’s turning out that there are squirrelly aspects of millimeter-wave spectrum that make it less than ideal in real-world wireless deployments. The 12 GHz spectrum might be the best hope for carriers to be able to deliver gigabit+ wireless drops to homes. Verizon has been deploying fiber-to-the-curb technology using mid-range spectrum and seeing speeds in the range of 300 Mbps. Using the 12 GHz spectrum could provide a reliable path to multi-gigabit wireless drops.

The big question facing the FCC is whether 12 GHz can somehow be used to satisfy both needs, which pits the 5G carriers against the satellite carriers. As an aside, before digging into the issue, I must observe that the satellite companies bring a new tone to FCC proceedings. Their FCC filings do everything except call the other side a bunch of dirty scoundrels. Probably only those who read a lot of FCC documents would notice, but it's something new and refreshing.

The current argument before the FCC comes from filings between Starlink and RS Access, which is associated with Michael Dell, who owns a lot of the spectrum in question. But this is part of a larger ongoing battle, and there have been skirmishes that also involved Dish Network, the largest owner of this spectrum.

The FCC will have to somehow untie the Gordian knot on a tough issue. As is to be expected with any use of spectrum, interference is always a major concern. The usefulness of any band of spectrum can be negated by interference, so carriers only want to deploy wireless technologies that have minimal and controllable interference issues. Both sides in the 12 GHz fight have trotted out wireless engineers who support their positions. RS Access says that spectrum can be shared between satellite and terrestrial usage, supporting the idea of not giving more spectrum solely to Starlink. Starlink says the RS Access engineers are lying and wants dedicated spectrum for satellite backhaul. I don’t know how the FCC can sort this out because the only way to really know if spectrum can be shared is to try it.

What I find most unusual about the fight is that the FCC is being dragged into a broadband issue. The last FCC Chairman, Ajit Pai, did his best to wash broadband out of the vocabulary at the FCC. But in today’s world, almost everything the FCC does, other than perhaps chasing robocallers, is ultimately about broadband. While this current 12 GHz fight might look like a spectrum battle to an outsider, it’s all about broadband.

Broadband Labels

There is one quiet provision of the Infrastructure Investment and Jobs Act that slipped under the radar. Congress is requiring that the FCC revamp broadband labels that describe the broadband product to customers, similar to the labels for food.

The Act gives the FCC one year to create regulations to require the display of a broadband label similar to the ones created by the FCC in Docket DA 16-357 in 2016. A copy of the FCC’s suggested broadband label from 2016 is at the bottom of this blog. The original FCC docket included a similar label for cellular carriers.

ISPs are going to hate this. It requires full disclosure of prices, including any special or gimmick pricing that will expire. ISPs will have to disclose data caps and also any hidden charges.

As you can see by the label below, it includes other information that big ISPs are not going to want to put into writing, such as the typical download and upload speeds for a broadband product as well as the expected latency and jitter.

To show you how badly big ISPs don’t want to disclose this information, I invite you to search the web for the broadband products and prices of the biggest ISPs. What you will mostly find is advertising for special promotions and very little on actual prices and speeds. Even when it’s disclosed, it’s in small print buried somewhere deep in an ISP website. And nobody talks about latency and jitter.
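The metrics the label asks for are not hard to produce from ordinary measurements. Here is a minimal sketch of how an ISP (or a skeptical customer) might turn raw samples into label-style figures. The function name and the sample numbers are my own invention for illustration; jitter is computed here as the mean absolute change between consecutive ping times, one common working definition.

```python
import statistics

def label_metrics(rtts_ms, speeds_mbps):
    """Summarize raw measurements into broadband-label-style figures.

    rtts_ms: round-trip ping times in milliseconds
    speeds_mbps: repeated download speed-test results
    """
    # Latency is commonly reported as the median round trip.
    latency = statistics.median(rtts_ms)
    # Jitter as the mean absolute difference between consecutive pings
    # (one common definition, similar in spirit to RFC 3550).
    jitter = statistics.mean(
        abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])
    )
    # A "typical" speed is better represented by the median of real
    # tests than by the marketing maximum.
    typical_speed = statistics.median(speeds_mbps)
    return {"latency_ms": latency, "jitter_ms": jitter,
            "typical_down_mbps": typical_speed}

print(label_metrics([20, 24, 19, 31, 22], [93, 101, 97, 88, 95]))
```

The point of using medians rather than maximums is exactly the fight I expect over the labels: a median reflects what a subscriber typically experiences, while the marketing number reflects the best the network ever did.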

What is even harder for ISPs is that they often don’t know the speeds. How does a telco describe DSL speeds when the speed varies by distance from the hub and by the condition of the copper wire on each street? I’ve seen side-by-side houses with different DSL speeds. Cable companies can have a similar dilemma since there seem to be neighborhoods in every city where the network underperforms – most likely due to degradation or damage to the network over time.

The sample label asks for the typical speed. Are ISPs going to take the deceptive path and list marketing speeds, even if they can’t be achieved? If an ISP tells the truth on the labels, shouldn’t it be required to submit the same answers to the FCC on the Form 477 data-gathering process?

I’m sure that big ISPs are already scrambling trying to find some way out of this new requirement, but that’s going to be hard to do since the directive comes from Congress. It’s going to get interesting a year from now, and I can’t wait to see the labels published by the biggest ISPs.

Big Internet Outages

Last year I wrote about big disruptive outages on the T-Mobile and CenturyLink networks. Those outages demonstrate how a single circuit failure on a transport route or a single software error in a data center can spread quickly and cause big outages. I join much of the industry in blaming the spread of these outages on the concentration and centralization of networks, where the nationwide routing of big networks is now controlled by only a handful of technicians in a few locations.

In early October, we saw the granddaddy of all network outages when Facebook, WhatsApp, and Instagram all crashed for much of a day. This was a colossal crash because the Facebook apps have billions of users worldwide. It’s easy to think of Facebook as just a social media company, but this suite of apps is far more than that. Much of the third world uses WhatsApp instead of text messaging to communicate. Small businesses all over the world communicate with customers through Facebook and WhatsApp. The Facebook crash also affected many other apps. Anybody who automatically logs into other apps using their Facebook login credentials was also locked out since Facebook couldn’t verify those credentials.

Facebook blamed the outage on what it called routine software maintenance. I had to laugh the second I saw that announcement and the word ‘routine’. Facebook would have been well advised to have hired a few grizzled telecom technicians when it set up its data centers. We learned in the telecom industry many decades ago that there is no such thing as a routine software upgrade.

The telecom industry has long been at the mercy of telecom vendors that rush hardware and software into the real world without fully testing it. An ISP comes to expect issues and glitches when it is taking part in a technology beta test. But during the heyday of the telecom industry throughout the 80s and 90s, practically every system small telcos operated was in beta test mode. Technology was changing quickly, and vendors rushed new and improved features onto the market without first testing them in real-life networks. The telcos and their end-user customers were the guinea pigs for vendor testing.

I feel bad for the Facebook technician who introduced the software problem that crashed the network. But I can’t blame him for making a mistake – I blame Facebook for not having basic protocols in place that would have made it impossible for the technician to crash the network.

I bet that Facebook has world-class physical security in its data centers. I’m sure the company has redundant fiber transport, layers of physical security to keep out intruders, and fire suppression systems to limit the damage if something goes wrong. But Facebook didn’t learn the basic Telecom 101 lesson that any general manager of a small telco or cable company could have told them. The biggest danger to your network is not from physical damage – that happens only rarely. The biggest danger is from software upgrades.

We learned in the telecom industry to never trust vendor software upgrades. Instead, we implemented protocols where we created a test lab to test each software upgrade on a tiny piece of the network before inflicting a faulty upgrade on the whole customer base. (The even better lesson most of us learned was to let the telcos with the smartest technicians in the state tackle the upgrade first before the rest of us considered it).
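That old telco discipline maps directly onto what software folks now call a canary rollout: upgrade a small test slice first, verify it, and only then touch the rest of the fleet. Here is a minimal sketch of the idea; the function names and the node list are hypothetical, not anything from Facebook’s or any vendor’s actual tooling.

```python
def staged_upgrade(nodes, apply_upgrade, is_healthy, canary_count=1):
    """Upgrade a small canary slice first (the telco 'test lab' step);
    halt before touching the rest of the fleet if a canary fails."""
    canaries, rest = nodes[:canary_count], nodes[canary_count:]
    for node in canaries:
        apply_upgrade(node)
        if not is_healthy(node):
            raise RuntimeError(f"canary {node} failed; rollout halted")
    # Only reached if every canary passed its health check.
    for node in rest:
        apply_upgrade(node)

# Demo: a failing health check stops the rollout after one node.
touched = []
try:
    staged_upgrade(["lab-switch", "office-1", "office-2"],
                   touched.append, lambda node: False)
except RuntimeError as err:
    print(err, "| nodes touched:", touched)
```

The design point is the same one the small telcos learned the hard way: the blast radius of a bad upgrade is limited to the canary, because nothing downstream is touched until the test slice proves itself.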

Shame on Facebook for having a network where a technician can implement a software change directly without first testing it and verifying it a dozen times. It was inevitable that a rollout practice without prudent testing safeguards would eventually result in the big crash we saw. It’s not too late for Facebook – there are still a few telco old-timers around who could teach them to do this right.