Reflecting on AT&T

I was talking to somebody about AT&T recently – we both worked at the company before its divestiture into the Baby Bells in 1984. That got me contemplating the odd path the company has taken since the days when it was perhaps the premier U.S. corporation.

AT&T was divested as a long-distance company in 1984 and thrown into a competitive environment where long distance rates and revenues plummeted. AT&T’s fortunes and status decreased to the point where SBC, Southwestern Bell, was able to acquire the company in 2005 while keeping the AT&T brand name.

The reunited Baby Bell companies and AT&T were far diminished from the days when AT&T was at the top of the world. SBC and the other Baby Bells started to cut back on the maintenance and upgrade of copper infrastructure soon after the divestiture. The companies felt emboldened to do this since divestiture also brought the beginning of telephone deregulation. The big telcos were no longer strictly required to meet quality and performance standards, and they responded by trimming technicians and capital repair and upgrade budgets.

During the 1990s, AT&T turned its attention to becoming the largest cellular carrier. The company spent most of its capital in the 1990s on cellular networks, which was timed perfectly with the explosion of the cellular business where practically everybody in the country came to have a cellphone. But even in the cellular world, AT&T didn’t put as much money into its cellular infrastructure and spectrum as its competitors. When AT&T won an exclusive contract to market the iPhone in 2007, it quickly became clear to customers that the AT&T (Cingular at the time) network was inadequate.

AT&T next made several devastatingly bad investments. It bought DirecTV, which then lost half of its customers over the next few years. AT&T was also apparently trying to keep up with Comcast when it spent roughly $100 billion to buy Warner Media. A few years later, AT&T unspun this deal and recognized a $47 billion loss to shareholders.

In the last decade, AT&T has been forced to spend a lot of money to upgrade its 4G and 5G networks. While cellular performance has improved dramatically for consumers, 5G still looks like a business plan looking for a revenue stream. Over the last decade, cellular competition has resulted in lower cellular prices for consumers, and it can be argued net 5G revenues for the industry have been a big negative. And now, the biggest cable companies are siphoning off valuable cellular market share.

AT&T and the other big telcos might also be facing an expensive effort to remove lead cables from the environment. Smaller telcos mostly replaced lead cables a long time ago, but it seems the big telcos never quite got around to getting rid of the lead.

AT&T has finally gotten serious over the last few years about building last-mile fiber networks for the future. The company built 500,000 fiber passings in the second quarter of this year to bring it up to 20.2 million fiber passings – with a goal to reach 30 million by the end of 2025. AT&T added 272,000 fiber customers in the second quarter to bring the company to over 7.7 million fiber subscribers. The company is still losing non-fiber customers and dropped 25,000 net broadband customers in the second quarter.

AT&T is late to the game compared to its cellular competitors in selling FWA cellular broadband and just rolled out its Internet Air product in April of this year. AT&T CEO John Stankey characterizes the company’s FWA plans as being used to replace copper infrastructure and perhaps to bid on BEAD grants in remote areas. For now, the company is far behind Verizon and T-Mobile in selling cellular home broadband, but AT&T recently announced it is now signing a ‘few thousand’ FWA customers daily.

It is not particularly easy to equate AT&T with some of the recent events in the company, because for all practical purposes, the company has been run by folks from SBC. But a lot of mistakes have been made in AT&T’s name, and it’s somewhat sad to see how far the company has fallen since the early 1980s. AT&T has made mistakes that would have sunk a lot of other businesses, but it is still diverse enough to generate the cash to keep trying over and over again.

Is Jitter the Problem?

Most people assume that when they have broadband issues, the problem is that their broadband speeds aren’t fast enough. But in many cases, problems are caused by high jitter and latency. Today, I’m looking at the impact of jitter.

What is Jitter? Jitter happens when incoming data packets are delayed and don’t show up at the expected time or in the expected order. When data is transmitted over the Internet it is broken into small packets. A typical packet is approximately 1,000 bytes or 0.001 megabytes. This means a lot of packets are sent to your home computer for even basic web transactions.
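To make that concrete, here is a small sketch of the arithmetic, using the ~1,000-byte packet size cited above (real packet sizes vary, and the transfer sizes below are made up for illustration):

```python
# Rough illustration of how many packets a transfer takes, assuming the
# ~1,000-byte packet size mentioned above. Real packet sizes vary.
PACKET_BYTES = 1_000

def packets_needed(transfer_megabytes):
    """Approximate packet count for a transfer of the given size in MB."""
    return int(transfer_megabytes * 1_000_000 / PACKET_BYTES)

print(packets_needed(5))      # a 5 MB web page -> 5,000 packets
print(packets_needed(1_000))  # a 1 GB video file -> 1,000,000 packets
```

Even a modest web page arrives as thousands of packets, which is why packet timing matters so much.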

Packets are created at the location that originates a web signal. This might be a site that is streaming a video, sending a file, completing a voice over IP call, or letting you shop online. The packets are sent in the order that the original data stream is encoded. Each packet can take a separate path across the Internet. Some packets arrive quickly, while others are delayed for some reason. Measuring jitter means measuring the degree to which packets end up at your computer late or in the wrong order.
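One common way to quantify this is the smoothed interarrival-jitter estimate used for real-time traffic, loosely following the RFC 3550 (RTP) formula. The sketch below is a minimal illustration; the timestamps are invented, not real measurements:

```python
# A minimal sketch of estimating jitter from packet send/receive timestamps,
# loosely following the RFC 3550 (RTP) interarrival-jitter formula.
# All timestamps below are illustrative, in milliseconds.

def interarrival_jitter(send_times, recv_times):
    """Return the smoothed jitter estimate (ms) for a packet stream."""
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent            # one-way transit time
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # timing variation between packets
            jitter += (d - jitter) / 16      # exponential smoothing per RFC 3550
        prev_transit = transit
    return jitter

# Packets sent every 20 ms; arrivals wobble because of congestion.
sent = [0, 20, 40, 60, 80]
recv = [10, 31, 49, 75, 88]   # transit times: 10, 11, 9, 15, 8 ms
print(round(interarrival_jitter(sent, recv), 2))
```

A perfectly steady network would show a jitter near zero; the more the transit times wobble, the higher the estimate climbs.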

Why Does Jitter Matter? Jitter matters the most when you are receiving packets for a real-time transaction like a streaming video, a Zoom call, a voice over IP call, or a video connection with a classroom. Your home computer is going to do its best to deliver the transmissions on time, even if all the packets haven’t arrived. You’ll notice missing packets of data as pixelation or fuzziness in a video, or as poor sound quality on a voice call. If enough packets are late, you might drop a VoIP call or get kicked out of a Zoom session.

Jitter doesn’t matter as much for other kinds of data. Most people are not concerned if it takes slightly longer to download a data file or to receive an email. These transactions don’t show up as received on your computer until all (or mostly all) of the packets have been received.

What Causes Jitter? The primary cause of jitter is network congestion. This happens when places in the network between the sender and the receiver are sent more data packets than can be processed in real time.

Bandwidth constraints can occur anywhere in a network where there is a possibility of overloading the capacity of the electronics. The industry uses the word chokepoint to describe any place where data can be restricted. On an incoming data transmission, an ISP might not have enough bandwidth on the incoming backbone connection. Every piece of ISP network gear that routes traffic within an ISP network is a potential chokepoint – a common chokepoint is where data is handed off to a neighborhood. The final chokepoint is at the home if data is coming in faster than the home broadband connection can handle it.

A common cause of overloaded chokepoints is old or inadequate hardware. An ISP might have outdated or too-small switches in the network. The most common chokepoints at homes are outdated WiFi modems or older computers that can’t handle the volume of incoming data.

One of the biggest problems with network chokepoints is that any time that an electronics chokepoint gets too busy, packets can be dropped or lost. When that happens, your home computer or your ISP will request the missing packets be sent again. The higher the jitter, the more packets that are lost and must be sent multiple times, and the greater the total amount of data being sent through the network. With older and slower technologies like DSL, the network can get paralyzed if failed packets accumulate to the point of overwhelming the technology.

Contrary to popular belief, faster speeds don’t reduce jitter, and can actually increase it. If you have an old inadequate WiFi modem and upgrade to a faster technology like fiber, the WiFi modem will be even more overwhelmed than it was with a slower bandwidth technology. The best solution to lowering jitter is for ISPs and customers to replace equipment that causes chokepoints. Fiber technology isn’t better just because it’s faster – it also includes electronics that move packets quickly through chokepoints.

FCC Considering New Rules for Data Breaches

Back in January of this year, the FCC issued a Notice of Proposed Rulemaking in WC Docket No. 22-21 that proposes to change the way that ISPs and carriers report data breaches to the FCC and to customers. The proposed new rules would modify some of the requirements of the customer proprietary network information (CPNI) rules that were originally put into place in 2007.

Since the 2007 CPNI order, all fifty states have adopted their own versions of data breach rules, as have federal agencies like the Federal Trade Commission, the Cybersecurity and Infrastructure Security Agency, and the Securities and Exchange Commission. The FCC is hoping to strengthen the rules on reporting data breaches since it recognizes that data breaches are increasingly common and can be damaging to customers.

The FCC completed a round of initial and reply comments by the end of March 2023, but is not expected to make a final order before the end of this year.

The current FCC rules for data breaches require carriers to notify law enforcement within seven days of a breach using an FCC portal that forwards a report to the Secret Service and the FBI. After a carrier has notified law enforcement, it can opt to notify customers, although that is not mandatory. One of the reasons this docket was initiated is that carriers have kept quiet about some major data breaches. The new rules would require carriers to provide additional information to the FCC and law enforcement. The new requirements also eliminate any waiting period, and carriers would be required to notify law enforcement and customers “without unreasonable delay”. The only exception to rapid customer notification would be if law enforcement asks for a delay.

The FCC is proposing new reporting rules that it says will better protect consumers, increase security, and reduce the impact of future breaches. There was a lot of pushback from carriers in comments to the docket that centered on two primary topics – the definition of what constitutes a data breach, and the requirement of what must be told to customers.

The FCC wants to expand the definition of data breach to include the inadvertent disclosure of customer information. The FCC believes that requiring the disclosure of accidental breaches will incentivize carriers to adopt more strenuous data security practices. Carriers oppose the expanded definition since disclosure would be required even when there is no apparent harm to customers.

Carriers also oppose the quick notification requirements. Carriers argue that it takes time to understand the breadth and depth of a data breach and to determine if any customers were harmed. Carriers also need to be working immediately after discovering a breach to contain and stop the problem.

Carriers are opposed to the FCC suggestions of what must be disclosed to customers. The FCC wants to make sure that customer notices include everything needed for customers to react to the breach. Carriers say that assembling the details by customer will take too long and could leave customers open to further problems. Carriers would rather make a quick blanket announcement instead of a detailed notice to specific customers.

One of the interesting nuances of the proposed rules is that there would be two types of notifications required – one for inadvertent leaks and another for what the FCC calls a harms-based notification. This would require a carrier to notify customers based on the specific harm that was caused. Carriers were generally in favor of the harms-based approach but didn’t want to confuse customers by notifying them of every inadvertent breach that doesn’t cause any harm.

Consumer advocates opposed allowing only the harms-based trigger, because it allows a carrier to decide when a breach causes harm. They fear that carriers will under-report harms-based breaches.

These rules would apply to all ISPs and carriers, regardless of size. While it might still be some months before any new rules become effective, small ISPs ought to use this impending change as a reason to review data security practices and the ability to notify customers.

Revisiting the BEAD Letter of Credit

I recently agreed to sign a letter to the NTIA that asks the agency to eliminate the BEAD requirement that grant recipients must have an irrevocable standby letter of credit (LOC) to apply for a BEAD grant. This letter was signed by over 300 folks in the industry including ISPs, local government, policy experts, and industry associations. I sign very few documents like this, but the letter of credit requirement is a terrible policy – and is a big concern to many of my clients.

To explain an irrevocable letter of credit in plain English, anybody winning a BEAD grant must set aside almost the same amount of cash as the amount of grant matching from the day that the grant is awarded through the completion of the grant construction process.

A letter of credit to satisfy the NTIA must come from an FDIC-insured bank with a Weiss rating of B- or better and must cover 25% of the award amount. A letter of credit is a specific kind of negotiable instrument where a bank guarantees to fund any shortfalls if a grant recipient fails in its financial obligations. If a grant applicant fails to complete the construction of the grant project, the money in the LOC would likely be claimed by the NTIA or the state grant office (still unclear on the details).

Banks will not issue a letter of credit without having liquid assets or collateral equal to the amount of the LOC. That means a grant applicant must not only have enough cash or borrowing capacity for its grant matching commitment, but must also set aside a large amount of hard cash as a guarantee for the LOC. The letter to the NTIA uses an example of an ISP that wants to fund a $10 million project using a 75% BEAD grant. In this example, the ISP would get $7.5 million from the grant. It would need to have $2.5 million available for the matching fund. It would need to lock up another $2.1 million for the letter of credit. That makes it incredibly expensive for an ISP to seek a BEAD grant. And FYI, this example is too conservative – grant recipients also must finance the operating costs of launching a grant project since those expenses are not covered by grants.
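The arithmetic behind that example can be sketched as follows. Note that a strict 25%-of-award calculation yields about $1.9 million for the LOC, a bit below the $2.1 million figure cited in the letter, which presumably reflects LOC fees or a somewhat different base; the numbers here are illustrative only:

```python
# A sketch of the cash an ISP must line up for the hypothetical $10 million
# BEAD project described above (75% grant, 25% match, LOC sized at 25% of
# the award). Figures are illustrative, not a definitive BEAD calculation.
PROJECT_COST = 10_000_000
GRANT_SHARE = 0.75
LOC_SHARE_OF_AWARD = 0.25

grant_award = PROJECT_COST * GRANT_SHARE             # $7.5M from BEAD
matching_funds = PROJECT_COST - grant_award          # $2.5M the ISP must fund
letter_of_credit = grant_award * LOC_SHARE_OF_AWARD  # cash locked behind the LOC

print(f"Grant award:        ${grant_award:,.0f}")
print(f"Matching funds:     ${matching_funds:,.0f}")
print(f"Letter of credit:   ${letter_of_credit:,.0f}")
print(f"Total cash tied up: ${matching_funds + letter_of_credit:,.0f}")
```

So even before financing any launch-period operating costs, the ISP needs well over $4 million in hand to pursue a $7.5 million grant.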

To make matters even worse, banks charge interest on a letter of credit because the bank must set aside a corresponding portion of its own equity to support the letter of credit. The cash tied up by a bank for an LOC can’t be used to make other loans – so the bank must charge interest.

This is a huge problem for many reasons. Anybody but the largest ISPs will have a hard or impossible time getting a letter of credit. Most ISPs don’t accumulate cash because the best use of cash for most ISPs is to continue to build more infrastructure. A large percentage of ISPs will not have the cash available up front to support the letter of credit. Many cities and municipalities are legally barred from buying a letter of credit.

There is some question if the banking industry as a whole is willing to float over $10 billion in letters of credit for BEAD grants. The banking industry is under a huge amount of stress due to high interest rates. Banks are far less interested in making any kind of infrastructure loans today when interest rates are high – because the bank’s risk is much higher than normal. I know ISPs that have been told by their current bank that they are not interested in issuing a letter of credit – and the chance of getting a LOC from a bank that doesn’t know an ISP is slim.

There is no reason for this requirement – or at least no reason for it to be so draconian. The NTIA is insisting on a letter of credit because it doesn’t want to be embarrassed by projects that don’t get completed. This requirement is a massive advantage for large ISPs over smaller ones, but even large ISPs hate this requirement. There are many successful broadband grant programs that don’t require a draconian letter of credit. There are other ways to provide assurance to a state grant office, like performance bonds or issuing grant funds in tranches as milestones are met.

Hopefully, the press from this letter will get the NTIA to reconsider its position. The requirement for the extreme version of a letter of credit is overkill. The letter of credit is going to stop a lot of ISPs from being able to ask for BEAD funds – the local ISPs that customers prefer. Maybe most germane is that requiring a letter of credit might actually drive more projects to fail as ISPs struggle to support the interest payments on an LOC.

One More Mapping Challenge

There is still one more upcoming map challenge to try to fix errors in broadband maps for purposes of the upcoming BEAD grants.

The NTIA is requiring state broadband offices to have one more mapping challenge at the state level before the state can issue broadband grants. The NTIA issued a sample template for a state challenge process, but each state is allowed to develop its own challenge process. States are not required to wait for an update in the FCC mapping system before using any updated information when awarding grants.

The NTIA suggests that challenges can be made by ISPs who are considering asking for a BEAD grant. NTIA also suggests that states accept challenges from the public, and I assume that includes challenges from cities and counties as well.

This is the challenge that a lot of folks have been waiting for because there are still a lot of inaccuracies in the FCC maps. While some states did a vigorous review of the FCC maps and asked for map updates – many states did not. Some counties also put an effort into correcting the FCC maps – but many did not. This is the final chance to get locations declared as eligible for BEAD grants. I assume that States will not accept locations for BEAD grants that are not in the corrected maps.

This challenge is also the one that folks have been waiting for since the NTIA suggests that there can be a challenge against the claimed broadband speeds. A lot of the early map challenges had to do with getting the mapping fabric right – which is the database that is used to define the location of the homes and businesses in the country.

My consulting firm has been working with communities, and we are still seeing a lot of inaccurate information. In every county we have examined, we find ISPs claiming speeds of 100/20 Mbps or faster that are not supported by Ookla speed tests. We’re also finding coverage errors in the maps where ISPs are reporting homes as covered that are not. A lot of the earlier challenges fixed coverage problems that were grossly incorrect, but it takes a lot more effort to find smaller pockets of ten or twenty homes that can’t buy good broadband but for which some ISP claims coverage.

Many of the problems in the FCC maps are directly due to the FCC rules for ISPs to report broadband for the maps. ISPs are allowed to claim marketing speeds for broadband instead of the actual speed delivered. There are far too many cases where the advertised marketing speed is much faster than what is being delivered. ISPs can also claim areas as covered by broadband where the ISP can supposedly provide broadband in ten working days. Finally, we often find ISPs claiming broadband coverage where an engineering field review doesn’t find any of the claimed technology.

The mapping is only an issue for BEAD because the IIJA legislation that created the BEAD grants insisted that FCC mapping must be used to allocate grants. I’m sure that language was inserted into the legislation at the insistence of the big ISP lobbyists to make sure that grant funds were not used to ‘overbuild’ existing broadband. At the time the IIJA legislation was passed, the FCC maps were atrocious. They have now been improved to the point where I would say they are now merely dreadful – but nobody believes the FCC maps are accurate. Most people only have to look around their immediate neighborhood on the FCC maps to find a few overstatements of coverage. My team has looked in great detail at perhaps a dozen counties and found a lot of mapping errors. I can’t even begin to think what that means on a national scale.

Unfortunately, most people in the country have no idea how this complicated BEAD process works. After the grants have been awarded, I expect we’ll start to hear from unserved homes that are not going to be covered by a BEAD grant. I believe this is going to be a lot more homes than anybody at the NTIA, the FCC, or state broadband offices wants to acknowledge.

Hopefully, the ISPs who want to file BEAD grants will take a shot at cleaning up the map errors now. That’s the only way to get grant funding for locations that are underserved but which don’t show that on the FCC maps. Everybody interested in doing this needs to pay attention to the state broadband office. Each state will first issue a plan to the NTIA describing how it will conduct the mapping challenge. These plans will likely have a 30-day opportunity for public comments. If you don’t like the map challenge rules, holler! Sometime later, states will hold the mapping challenge, and most will likely allow only a narrow time window to file challenges.

What Happened to Quantum Networks?

A few years ago, there were a lot of predictions that we’d see broadband networks converting to quantum technology because of the enhanced security. As happens with many new technologies, quantum computing is advancing at a slower pace than the wild predictions that accompanied the launch of the new technology.

What are quantum computing and quantum networks? The computers we use today are all Turing machines that convert data into bits represented by either a 1 or a 0 and then process data linearly through algorithms. Quantum computing takes advantage of a property of subatomic particles called superposition, meaning that particles can exist in more than one state simultaneously, such as an electron occupying two energy levels at once. Quantum computing mimics this subatomic world by creating what are called qubits, which can exist as both a 1 and a 0 at the same time. One qubit can perform two calculations at once, and when many qubits are used together, the number of simultaneous calculations grows exponentially. A four-qubit computer can perform 2^4, or 16, calculations at the same time. Some quantum computers are currently capable of 1,000 qubits, or 2^1000 simultaneous calculations.
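The exponential growth described above is easy to illustrate (the helper function here is hypothetical, just for counting states):

```python
# Illustration of the exponential scaling described above: an n-qubit
# register can represent 2^n states at once. Hypothetical helper for
# counting only -- not a quantum simulation.
def simultaneous_states(qubits):
    return 2 ** qubits

print(simultaneous_states(4))    # 16
print(simultaneous_states(10))   # 1,024
# A 1,000-qubit machine corresponds to 2^1000 states -- a number with
# 302 decimal digits:
print(len(str(simultaneous_states(1000))))
```

Going from 4 qubits to 1,000 doesn’t multiply the state space by 250 – it multiplies the exponent, which is why the numbers quickly become astronomical.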

We are starting to see quantum computing in the telecom space. In 2020, Verizon conducted a network trial using quantum key distribution technology (QKD). This uses a method of encryption that might be unhackable. Photons are sent one at a time alongside an encrypted fiber optic transmission. If anybody attempts to intercept or listen to the encrypted light stream, the polarization of the photons is impacted, and the sender and receiver of the message both know instantly that the transmission is no longer safe. The theory is that this will stop hackers before they can learn enough to crack into and analyze a data stream. Verizon also added a second layer of security using a quantum random number generator that updates the encryption key randomly in a way that can’t be predicted.

A few months ago, EPB, the municipal fiber provider in Chattanooga, announced a partnership with Qubitekk to let customers on the City’s fiber network connect to a quantum computer. The City is hoping to attract companies to the City that want to benefit from quantum computing. The City has already heard from Fortune 500 companies, startups, and government agencies that are interested in using the quantum computer links.

EPB has established the quantum network separately from its last-mile network to accommodate the special needs of quantum network transmissions. The quantum network uses more than 200 existing dark fibers to establish customer links on the quantum network. EPB engineers will constantly monitor the entangled particles on the quantum network.

Quantum computing is most useful for applications that require large numbers of rapid calculations. For example, quantum computing could produce faster and more detailed weather maps in real time. Quantum computing is being used in research on drugs or exotic materials where scientists can compare multiple complex molecular structures easily. One of the most interesting current uses is that quantum computing can greatly speed up the processing power of artificial intelligence that is now sweeping the world.

It doesn’t look like quantum networking is coming to most fiber networks any time soon. The biggest holdup is the creation of efficient and cost-effective quantum computers. Today, most of these computers are in labs at universities or government facilities. The potential for quantum computing is so large that the technology could explode onto the scene when the hardware issue is solved.

Defining Affordable Broadband

One of the requirements for the $42.5 billion BEAD grants that comes directly from the Infrastructure Investment and Jobs Act legislation is that broadband should be affordable for middle-class families. The specific legislative requirement is that, “High-quality broadband services are available to all middle-class families . . . at reasonable prices.” The NTIA, which oversees the BEAD grants, has not defined a benchmark for an affordable middle-class price, so State broadband offices are on their own to decide how to handle this requirement.

Pew Charitable Trusts took a shot at defining affordable middle-class broadband in a recent study. Pew based affordability upon an FCC study in 2016 that concluded that the average middle-class family can afford to pay as much as 2% of household income on broadband. Pew is not recommending that States automatically adopt the 2% definition – instead, they looked at how that benchmark would be calculated in various parts of the country.

Pew defined middle-class household incomes to be between $40,000 and $150,000 annually. That’s a somewhat simplistic assumption in that the definition of middle-class also depends on the number of family members. Pew found that between 51% (in the South) and 57% (in the Midwest) of households are classified as middle-class using that income range.

Household incomes vary significantly across the country – but so does the cost of living. The Pew article calculates the monthly affordable broadband rate set at 2% of average middle-class incomes for both states and regions. The results are interesting. The highest affordable rate using the 2% definition is in the Northeast at $107.65 per month. In the South, the rate would be $84.79. The national average affordable rate set at 2% is $93.21. States vary even more widely – the highest affordable rate at the 2% benchmark is in Rhode Island at $150.73 per month, and the lowest is in Mississippi at $68.53.
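The 2% arithmetic behind those figures is straightforward. The sketch below back-computes the average income implied by the Northeast rate; that income figure is my own derivation for illustration, not a number from the Pew study:

```python
# The 2%-of-income affordability math behind the Pew figures above.
def affordable_monthly_rate(annual_income, share=0.02):
    """Monthly broadband rate at a given share of annual household income."""
    return round(annual_income * share / 12, 2)

# An average middle-class income of about $64,590 reproduces the Northeast
# figure cited above (the income is back-computed, not from the Pew study):
print(affordable_monthly_rate(64_590))   # 107.65
```

Running the same formula against Rhode Island’s $150.73 rate implies an average middle-class income over $90,000, which shows how much the benchmark swings with local incomes.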

One of the reasons that Pew doesn’t like the FCC’s 2% definition is that there are a lot of middle-class homes that can’t afford the rate that would be established for their state or region. For example, 28% of homes in the Northeast that are considered middle-class could not afford the $107.65 rate.

Pew shows that States have another challenge in trying to meet this grant requirement. States have no good data on existing rates for broadband. ISPs have a wide array of ways that they price broadband that includes offering special rates to some customers for term contracts, burying broadband rates in a bundle so that nobody knows what broadband costs, and adding hidden fees like an expensive modem in order to buy broadband. It’s hard to set a benchmark rate for broadband when it’s nearly impossible to define what the public is paying today for broadband.

The big question is how States might use an affordable middle-class rate. Federal, state, and local governments have no regulatory authority to set or approve broadband rates. The FCC theoretically had this ability until the Ajit Pai FCC eliminated Title II regulatory authority over broadband. However, no past FCC ever considered regulating broadband rates, even when they had the authority.

This raises the question of what a State might do once it determines an affordable middle-class rate. A broadband office can’t require that ISPs have rates under any benchmark it establishes. It even seems problematic if a broadband office uses prices as one of the criteria for awarding grants.

The first day I read the BEAD grant legislation, I knew that the middle-class affordability requirement was going to be a challenge. I’m not sure there is a good answer for how a State can do this, and I’m sure they are all still puzzled.

The Power of a Letter of Support

The newly released Virginia proposed BEAD grant rules highlight an issue that was included in the original grant rules. The BEAD grants give significant power to local governments through local letters of support.

ISPs have always asked for letters of support for broadband grants, and most communities have handed them out like Halloween candy. There was no reason not to support anybody who wanted to build better broadband, so a community would reflexively give a letter of support to most ISPs who asked for one.

The BEAD grants are different, and communities need to carefully weigh giving letters of support. The Virginia BEAD grants rules – and I think most other states as well – are going to award a significant amount of grant scoring points for an ISP that gets a local letter of support. In the Virginia grant scoring, a letter of support represents 10% of the total points needed for an ISP trying to win a grant.

Since most BEAD grants are going to be awarded in rural areas, County governments are the local government entities that will matter the most for BEAD. A County needs to carefully think about the ISPs it wants to support – if the County provides a local support letter for only one ISP, that ISP has an instant advantage over other ISPs in the grant scoring.

If a County gives every ISP a support letter, it’s the same as endorsing nobody, because all ISPs will score the same in the BEAD grant scoring.

I’ve been working with counties all over the country, and many of them have a strong preference for who wins the grant funding. For instance, a County might have a strong preference for supporting fiber over wireless technology. A County might prefer to support local ISPs over large ones, or support a large ISP already operating in the County over a newcomer. Counties often have a strong preference, and the letter of support is a way to express these wishes.

The Virginia BEAD grant rules are also interesting because the State gives grant scoring points to ISPs that visit with local governments and explain who they are and their plans. There is no guarantee that other states will have the same requirement, but it’s a good one. If a County is going to decide which ISPs it wants to support, it needs to meet and hear from them. ISPs pursuing BEAD grants will differ in important ways. Technology differences are one obvious way, but there are many others. Counties care a lot about broadband prices and might strongly prefer an ISP that promises low rates. A County might care about issues like the location of technicians and customer service – will there be jobs created in the County?

If a County government wants to use the letter of support to its best advantage, the County will have to choose the ISP or ISPs it is willing to support. Even if ISPs are not rewarded for visiting, as they are under the Virginia rules, the County is going to want to talk with them. In the past, I’ve seen ISPs ask for letters of support a week before grants are due – that is not going to cut it if the letter of support means something.

The bottom line is that BEAD grant rules are giving a County a power it never had before – a chance to influence who wins broadband grants. This is the equivalent of a County voting for the ISP it wants to win the grant – the County will be helping to pick winners and losers.

The one downside to the process is that it won’t be particularly comfortable for a County to tell some ISPs they won’t get a support letter. But if a County has a strong preference about who will provide broadband for the next fifty years, it should exercise this power.

Counties need to read the State broadband grant rules when they are published to understand the importance of the letter of support. A County has power if the grant scoring rules award points to an ISP for having a local letter of support.

The Causes of Network Outages

The Uptime Institute (UI) is an IT industry research firm that is best known for certifying that data centers meet industry standards. UI issues an annual report that analyzes the causes of data center outages. These causes are relevant to the broadband industry because the same kinds of issues shut down switching hubs and Network Operations Centers.

The following table shows the underlying cause of the network outages in 2022 that were severe enough to be publicly reported.

Cause of Outage          Publicly Reported
Cabling                         9%
Capacity Issues                 6%
Cooling                         6%
Fiber Cut                      17%
Fire                            7%
IT Software Issues             18%
Network Connectivity           12%
Power                           9%
Cyberattack                    11%
Third Party                     7%

UI cautions that it is somewhat skeptical of the stated reasons for outages since data center owners are notorious for a lack of transparency. The report says that a large portion of outages are caused by human error and management failures. An increasing number of major outages are caused by cloud, colocation, ISP, or hosting companies.

Of growing concern are cyberattacks and ransomware, which accounted for 11% of major outages. These outages are often lengthy and can lead to contamination and loss of integrity of stored data. UI says one reason for increased security breaches is a growing reliance on industry-standard operating systems and remote monitoring, both of which create homogeneous systems that are easier for hackers to understand and breach.

UI ranked 6% of known outages in 2022 as severe, meaning there was a major and damaging disruption of services, including large financial losses, compliance breaches, customer losses, and safety issues. Another 8% were considered serious, meaning the outage was still damaging.

The number of overall major outages is about the same as in 2019, when the annual reports were first generated – in spite of the huge amounts of resources being spent to improve technology, software, and physical redundancy.

One interesting warning from the report is that UI cautions businesses not to rely on Service Level Agreements (SLAs) that promise 99.9%+ reliability. Many data center providers fall short of that level of performance, which is possibly the reason for the lack of transparency.
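To put those SLA percentages in perspective, an availability figure translates directly into an allowed downtime budget. A quick sketch of that arithmetic (my own illustration, not a calculation from the UI report):

```python
# Convert an availability percentage into the downtime it permits per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def downtime_hours_per_year(availability_pct):
    """Hours of outage per year allowed by a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% uptime allows {downtime_hours_per_year(pct):.2f} hours of downtime per year")
```

A 99.9% SLA still allows almost nine hours of outage in a year, which is why the report treats such promises with caution.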

Following are the major outages in 2022 by sector. Of particular concern to readers of this blog is telecommunications, which has seen a huge increase in the percentage of total outages since 2019.

Sector                   Share of Outages
Cloud / Internet Giant         19%
Digital Services               30%
Financial Services              7%
Government                      7%
Telecommunications             32%
Transportation                  5%

Cable Customer Losses in 2Q 2023

Leichtman Research Group recently released the cable customer counts for the largest providers of traditional cable service in the second quarter of 2023. LRG compiles most of these numbers from the statistics provided to stockholders, except for Cox and Mediacom, for which LRG now provides a combined estimate. Leichtman says this group of companies represents 96% of all traditional U.S. cable customers.

The traditional cable providers continue to lose customers at a torrid pace, losing over 1.6 million customers in the second quarter – slightly fewer than in the second quarter of 2022. Overall, the traditional cable providers lost over 17,700 customers every day during the quarter. The overall penetration of traditional cable TV is now around 46% of all households, down from 73% at the end of 2017.

                           2Q 2023       Change    % Change
Comcast                 14,985,000     (543,000)     -3.5%
Charter                 14,706,000     (200,000)     -1.3%
DirecTV                 12,350,000     (400,000)     -3.1%
Dish Network             6,901,000     (197,000)     -2.8%
Cox & Mediacom           3,340,000     (100,000)     -2.9%
Verizon                  3,155,000      (70,000)     -2.2%
Altice                   2,405,900      (69,900)     -2.8%
Breezeline                 296,952       (3,732)     -1.2%
Frontier                   267,000      (21,000)     -7.3%
Cable ONE                  158,100       (8,900)     -5.3%
   Total                58,564,952   (1,613,532)     -2.7%
YouTube TV               5,900,000      200,000       3.5%
Hulu + Live TV           4,300,000     (100,000)     -2.3%
Sling TV                 2,003,000      (97,000)     -4.6%
FuboTV                   1,167,000     (118,000)     -9.2%
Total Cable Company     35,733,852     (916,632)     -2.5%
Total Telco / Satellite 22,673,000     (688,000)     -2.9%
Total vMvPD             13,370,000     (115,000)     -0.9%
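The per-day loss figure and the vMvPD subtotal above can be sanity-checked with simple arithmetic. A quick sketch (the 91-day quarter length is my assumption):

```python
# Sanity-check the quarterly loss figures reported in the table above.
total_traditional_loss = 1_613_532   # total traditional cable losses, 2Q 2023
days_in_quarter = 91                 # assumed length of the quarter

print(f"Losses per day: {total_traditional_loss / days_in_quarter:,.0f}")

# vMvPD quarterly changes from the table (gains positive, losses negative)
vmvpd_changes = {"YouTube TV": 200_000, "Hulu + Live TV": -100_000,
                 "Sling TV": -97_000, "FuboTV": -118_000}
print(f"Net vMvPD change: {sum(vmvpd_changes.values()):,}")
```

This works out to over 17,700 traditional cable customers lost per day and a net loss of 115,000 across the vMvPDs, matching the figures in the table.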

It doesn’t look like people are replacing traditional cable with online alternatives like YouTube TV and Hulu + Live TV – the vMvPDs collectively lost 115,000 customers in the quarter.

Charter is still losing customers at a slower rate than the other traditional cable companies. If current trends continue, Charter ought to soon have the most cable customers – something that could not have been imagined only three or four years ago.

The biggest news is that Comcast was the largest overall loser, down 543,000 cable customers in the quarter. The largest percentage losers continue to be Frontier and Cable ONE.