Challenging the FCC Broadband Maps

I’ve written many times about the absurdity of using the FCC mapping data for identifying areas with or without broadband. I’ve lately been looking at the FCC mapping data in West Virginia and New Mexico – two of the states with the worst broadband coverage in the country – and the FCC maps are atrocious. I see counties where the claimed broadband coverage in the FCC maps is wrong for more than half of the geographic area.

Unfortunately, the FCC is about to award $20.4 billion in RDOF grants later this year based solely on these dreadful maps. Luckily, there are other grant programs that allow grant applicants to challenge the FCC data. This includes the USDA ReConnect grants and many of the state grant programs.

One of the few ways to challenge the FCC maps is with speed tests. Anybody undertaking such a challenge needs to be aware that the incumbent telcos might challenge your speed test results, and unfortunately, some of their criticisms will be right. This means that anybody challenging the FCC maps has to take some steps to maximize the effectiveness of speed tests. Here are a few aspects of administering speed tests that should be considered.

  • A speed test needs to distinguish between cellular and landline connections. Rural folks with no broadband connection or those using cellular for home broadband are going to take the test with their cellphone. While such results are interesting, cellular speed tests can’t be mixed into a challenge of landline broadband coverage.
  • Everybody needs to use the identical speed test because each speed test measures speed using a different algorithm. Never use a speed test from the incumbents – it might be rigged to show results that are better than they really are.
  • A challenge can be most effective if it can get feedback from folks with no broadband available at their home. You need to somehow solicit and include input from folks who can’t take a speed test at all because they have no connection.
  • You also should be aware a speed test sometimes doesn’t work for somebody with really slow broadband or high latency. We recently sat on the phone with somebody using satellite broadband and they couldn’t get the speed test to complete, even after many attempts.
  • The biggest challenge is in mapping the results. If you map the results so precisely that they can be overlaid on individual homes in Google Earth, then you have given an incumbent ISP the chance to challenge the test results. They can likely identify homes where they aren’t the ISP, or homes that have broadband that meets the FCC speed thresholds, meaning that slow speed test results might be due to poor WiFi or some other cause. Ultra-precise mapping might also violate the privacy of the people taking the speed test. This is an issue that many state speed test programs have wrestled with – some of them take such care to mask the precise location of the data that their final product can’t be used to challenge the FCC maps. For example, if speed test results are summarized by Census block, then the results incorporate the same kinds of problems that are included in the FCC maps. Probably the best approach is to embed the final results in a pdf with low enough resolution that individual homes can’t be identified. One way to aggregate results before mapping is sketched just after this list.
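That aggregation can be as simple as snapping each test result to a coarse grid cell before mapping. Below is a rough Python sketch – the field names and the roughly 1 km (0.01 degree) cell size are invented for illustration; a real challenge would pick the cell size with the mapping vendor or state broadband office.

```python
# Illustrative sketch: aggregate speed test results into coarse grid cells
# so they can be mapped without identifying individual homes.
# Field names (lat, lon, download_mbps) and the 0.01-degree cell size
# (roughly 1 km) are assumptions for illustration only.
from collections import defaultdict
from statistics import median

def snap(value, cell_size=0.01):
    """Round a coordinate down to the corner of its grid cell."""
    return round(value - (value % cell_size), 4)

def aggregate(tests, cell_size=0.01):
    """Group raw speed tests by grid cell and report the median download speed."""
    cells = defaultdict(list)
    for t in tests:
        key = (snap(t["lat"], cell_size), snap(t["lon"], cell_size))
        cells[key].append(t["download_mbps"])
    return {key: {"tests": len(speeds), "median_down_mbps": median(speeds)}
            for key, speeds in cells.items()}

sample = [
    {"lat": 38.3412, "lon": -81.6326, "download_mbps": 3.2},
    {"lat": 38.3498, "lon": -81.6301, "download_mbps": 4.1},
    {"lat": 38.3421, "lon": -81.6390, "download_mbps": 22.7},
]
print(aggregate(sample))
```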

There is one other way to map broadband coverage. An experienced field technician or engineer can drive around an area and identify every broadband asset in the field. They can precisely identify where the cable TV networks end, down to the house. They can identify field DSLAMs that generate DSL signals out of rural cabinets – and they can often identify the specific flavor of DSL and know the maximum speed capability of a given unit. They can identify the location and height of wireless transmitters and can map out the likely coverage areas. This kind of effort is most effective at identifying where there is no broadband. A good technician can make a decent map of the likely maximum broadband speeds available in a given area – speeds that are rarely actually delivered on rural networks. This kind of challenge could be expensive and time-consuming, and I’ve never seen a challenge done this way. But I know engineers and technicians capable of making highly accurate maps.

Communities can tackle speed tests – they can get households to take the same speed test, such as the one from Ookla, and then match and map the results using GIS data. This can be a lot of work. Mapping can also be provided by many telecom engineering companies. One of the lowest-cost solutions is a speed test from Neo Partners that administers the test and automatically overlays the results on Google Maps.

Even if you aren’t challenging a grant, communities ought to consider speed tests to better understand the broadband in their community. As an example, I worked for a city where the speed tests showed that one neighborhood had far slower speeds than the rest of the city – something the city hadn’t known before the speed test. We’ve done speed tests that showed that the incumbent was delivering more than the advertised speed – again, something worth knowing.

Broadband Consumption Continues Explosive Growth

OpenVault just released its Broadband Industry Report for 4Q 2019, which tracks the way the US consumes data. The results are as eye-opening as the company’s reports from the last few years. OpenVault has been collecting broadband usage data for more than ten years.

As usual, the OpenVault statistics are a wake-up call for the industry. The most important finding is that the average monthly data consumed by households grew by 27% from 2018 to 2019 – in the fourth quarter of 2019 the average home used 344 gigabytes of data, up from 275 gigabytes a year earlier. Note that consumption is a combination of download and upload usage, with most of it being download.

For the first time, the company compared homes with unlimited data plans to those that have plans with data caps. They reported that homes with no data caps used 353 gigabytes per month while homes with data caps used 337 gigabytes per month. That statistic would suggest that homes with data caps try to curb their usage to avoid overage charges.

Interestingly, median usage was significantly different than average usage. Median means the same as midpoint, and the median data usage was 191 gigabytes per month, meaning half of US homes used more than that and half used less. In looking at their numbers, I have to suppose that the median is a lot less than average due to the many homes using slow DSL that can’t consume a lot of broadband.
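A quick illustration of how a long tail of heavy users pulls the average well above the median – the household figures below are invented for illustration, not OpenVault data:

```python
# Invented household usage figures (GB/month) to illustrate mean vs. median;
# a few heavy users pull the average well above the midpoint household.
from statistics import mean, median

usage_gb = [45, 80, 120, 150, 190, 210, 260, 400, 700, 1200]
print(f"mean:   {mean(usage_gb):.0f} GB")    # skewed upward by the heavy users
print(f"median: {median(usage_gb):.0f} GB")  # the midpoint household
```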

The report also looks at power users – homes that consume a lot of broadband. They report that nearly 1% of homes now use 2 terabytes per month and 7.7% use over 1 terabyte per month. A terabyte is 1,000 gigabytes. The percentage of homes using over 1 terabyte climbed from 4% a year earlier. This statistic is important because it shows a quickly increasing number of homes that will be hitting the 1 terabyte data caps of ISPs like Comcast, AT&T, Cox, and CenturyLink. I clearly remember Comcast saying just a few years ago that almost no homes had an issue with their data caps, but that no longer can be true.

Homes are starting to buy 1 gigabit broadband when it’s available and affordable. 2.8% of homes in the country now subscribe to gigabit speeds, up 86% from the 1.5% of homes that bought gigabit in 2018.

54% of homes now purchase broadband plans with speeds of 100 Mbps or faster. Another 23.6% of homes subscribe to broadband between 50-75 Mbps. This means that nearly 78% of homes subscribe to plans of 50 Mbps or faster. The average subscribed speed grew significantly since 2018, from 103 Mbps to 128 Mbps. These subscriber statistics should shame the FCC for deciding to stick with the 25/3 Mbps definition of broadband. The agency is clearly setting a target speed for rural America that is far behind the reality of the marketplace.

OpenVault made one comparison to Europe and showed that we consume a lot more broadband here. While average US consumption in 4Q 2019 was 344 gigabytes, the European average was 196 gigabytes.

As OpenVault statistics have done in the past, they show network engineers that the demand for broadband is not abating but is continuing to explode. An annual 27% increase in broadband consumption means that demand keeps doubling roughly every three years. If that growth rate is sustained, our networks need to be prepared within a decade to carry roughly ten times more data than today. That’s enough to keep network engineers up at night.
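The math behind that is simple compounding – a quick check, using the 27% growth figure and the 4Q 2019 average as the starting point:

```python
# Compound growth of household broadband consumption at 27% per year,
# starting from the reported 344 GB/month average in 4Q 2019.
base_gb = 344
rate = 0.27
for years in (3, 5, 10):
    factor = (1 + rate) ** years
    print(f"{years:>2} years: {base_gb * factor:,.0f} GB/month ({factor:.1f}x today)")
```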

Broadband in China

For years I’ve been hearing how we are losing the broadband battle with China, so I decided to take a look at the current state of broadband in the country. The China Internet Network Information Center (CNNIC) publishes statistics about the state of broadband in the country, and I used the Statistical Report on Internet Development in China from August 2019 in writing this blog.

Here are some of the more interesting statistics about the state of broadband in the country:

  • China is a lot larger than the US, with a current population just below 1.4 billion compared to an estimated US population of around 327 million.
  • As of June 2019, China had 854 million people connected to the web in some manner, for an overall Internet penetration based on population of 61.2%. It’s not easy to compare that statistic to the US since we track Internet usage using subscriptions to households.
  • China is still rapidly adding people to the internet. In the first six months of 2019, the country added 26 million new Internet users.
  • The Chinese interface with the internet in a variety of ways, with the following statistics for June 2019:

Device        Users             Share of Internet users
Cellphone     847 million       99%
Desktop       394 million       46%
Laptop        308 million       36%
TV            283 million       33%
Tablet        242 million       28%

  • As of June 2019, China had 396 million users on fiber-to-the-home. China is adding fiber faster than the US and there were over 67 million customers added for the year ending in June 2019.
  • Chinese speeds for landline connections averaged 31.3 Mbps in June 2019, up 25% since 2018. Mobile speeds in 2019 averaged 23 Mbps, up 7% from 2018.
  • Like the US, China has a rural digital divide. In 2018 the country had 225 million rural Internet users representing a 39% penetration. Urban Internet users were 630 million, a 77% penetration. There are 347 million rural Chinese without access to the Internet, almost 25% of all citizens in the country. It’s hard to compare that statistic to the US since the FCC does such a lousy job of counting households with broadband.
  • China is working to solve the rural digital divide and added 3 million rural Chinese to the Internet in the first half of 2019. However, much like here, that rate of growth is glacial – at that pace it will take about 36 years for the rural population to reach the penetration currently seen in urban areas (the arithmetic is sketched just after this list).
  • The Chinese are heavy users of instant messaging with 96.5% of Internet users using messaging in 2018.
  • It’s important to remember that Chinese web users are monitored closely and live behind what the west calls the Great Firewall of China. The government closely tracks how people use broadband, producing the kind of usage statistics we don’t directly collect here:

Watch online video       88.8%
Use online news          80.3%
Shop online              74.8%
Online bill payment      74.1%
Order meals online       49.3%
Car hailing services     39.4%

  • China’s mobile data traffic is growing even faster than in the US. In the first half of 2018, the Chinese mobile networks carried 266 petabytes of traffic. By the first half of 2019 that traffic had roughly doubled to 554 petabytes. China’s cellular data usage doubled in one year, while here it’s taking two years to double. The numbers are huge – a petabyte is a million gigabytes.
  • The average Chinese broadband user spent 27.9 hours per week online in 2019.
  • The CNNIC tracks why people don’t use the internet. 45% don’t have access to broadband; 37% lack the skills to use broadband; 15% don’t have computers; 11% say they have no need. The interesting thing about the list in China is that nobody said they couldn’t afford Internet access.
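The 36-year estimate in the rural divide bullet above falls out of simple arithmetic on the CNNIC numbers. Here is a rough check, which assumes the rural population and the 77% urban penetration target both stay fixed:

```python
# Rough check of the rural catch-up estimate using the CNNIC figures cited above.
# Assumes the rural population and the 77% urban penetration target stay fixed.
rural_users = 225_000_000          # rural Internet users (2018)
rural_penetration = 0.39           # 39% of the rural population
urban_penetration = 0.77           # target penetration
added_per_year = 6_000_000         # ~3 million added per half year

rural_population = rural_users / rural_penetration        # ~577 million people
target_users = urban_penetration * rural_population       # ~444 million users
years_to_catch_up = (target_users - rural_users) / added_per_year
print(f"Years to reach urban penetration: {years_to_catch_up:.1f}")  # roughly 36 years
```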

There was one interesting thing missing in the Chinese report. There was no mention of 5G. That means, at least to the government agency that tracks broadband usage in China, there is no 5G race. It’s obvious that the Chinese need 5G, probably more badly than here since the volumes of data on their mobile networks are doubling annually. But the topic wasn’t worth a mention in their annual report of the status of broadband.

There is No Artificial Intelligence

It seems like most new technology today comes with a lot of hype. Just a few years ago, the press was full of predictions that we’d be awash in Internet of Things sensors that would transform the way we live. We’ve heard similar claims for technologies like virtual reality, blockchain, and self-driving cars. I’ve written a lot about the massive hype surrounding 5G – in my way of measuring things, there isn’t any 5G in the world yet, but the cellular carriers are loudly proclaiming it’s everywhere.

The other technology with a hype that nearly equals 5G is artificial intelligence. I see articles every day talking about the ways that artificial intelligence is already changing our world, with predictions about the big changes on the horizon due to AI. A majority of large corporations claim to now be using AI. Unfortunately, this is all hype and there is no artificial intelligence today, just like there is not yet any 5G.

It’s easy to understand what real 5G will be like – it will include the many innovations embedded in the 5G specifications like network slicing and dynamic spectrum sharing. We’ll finally have 5G when a half-dozen new 5G technologies are working on our phones. Defining artificial intelligence is harder because there is no specification for AI. Artificial intelligence will be here when a computer can solve problems in much the way that humans do. Our brains evaluate the data on hand to see if we know enough to solve a problem. If not, we seek the additional data we need. Our brains can consider data from disparate and unrelated sources to solve problems. There is no computer today that is within a light-year of that ability – there are not yet any computers that can ask for the specific additional data needed to solve a problem. An AI computer doesn’t need to be self-aware – it just has to be able to ask the questions and seek the right data needed to solve a given problem.

We use computer tools today that get labeled as artificial intelligence such as complex algorithms, machine learning, and deep learning. We’ve paired these techniques with faster and larger computers (such as in data centers) to quickly process vast amounts of data.

One of the techniques we think of as artificial intelligence is nothing more than using brute force to process large amounts of data. This is how IBM’s Deep Blue works. It can produce impressive results and shocked the world in 1997 when the computer beat Garry Kasparov, the world chess champion. Since then, the IBM Watson system has beaten the best Jeopardy players and is being used to diagnose illnesses. These computers achieve their results by processing vast amounts of data quickly. A chess computer can consider huge numbers of possible moves and put a value on the ones with the best outcome. The Jeopardy computer had massive databases of human knowledge available like Wikipedia and Google search – it looks up the answer to a question faster than a human mind can pull it out of memory.
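To make “brute force” a little more concrete, the heart of a game-playing computer like Deep Blue is a search that tries every move, recurses several moves deep, and keeps the best-scoring line. The toy sketch below shows the idea – the “game” and the scoring function are stand-ins, not real chess logic:

```python
# Toy illustration of brute-force game-tree search (the technique behind
# Deep Blue, vastly simplified): enumerate every move, recurse, and keep
# the best-scoring line. The "game" here is a stand-in, not chess.
def minimax(position, depth, maximizing, moves, evaluate):
    options = moves(position)
    if depth == 0 or not options:
        return evaluate(position)
    scores = [minimax(m, depth - 1, not maximizing, moves, evaluate)
              for m in options]
    return max(scores) if maximizing else min(scores)

# Stand-in game: a position is just a number, a "move" adds 1, 2, or 3,
# and the evaluation favors positions divisible by 4.
moves = lambda p: [p + 1, p + 2, p + 3] if p < 20 else []
evaluate = lambda p: 1 if p % 4 == 0 else 0
print(minimax(0, 4, True, moves, evaluate))  # the score the first player can force
```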

Much of what is thought of as AI today uses machine learning. Perhaps the easiest way to describe machine learning is with an example. Machine learning uses complex algorithms to analyze and rank data. Netflix uses machine learning to suggest shows that it thinks a given customer will like. Netflix knows what a viewer has already watched. Netflix also knows what millions of others who watch the same shows seem to like, and it looks at what those millions of others watched to make a recommendation. The algorithm is far from perfect because the data set of what any individual viewer has watched is small. I know in my case, I look at the shows recommended for my wife and see all sorts of shows that interest me, but which I am not offered. This highlights one of the problems of machine learning – it can easily be biased and draw wrong conclusions instead of right ones. Netflix’s suggestion algorithm can become a self-fulfilling prophecy unless a viewer makes the effort to look outside of the recommended shows – the more a viewer watches what is suggested, the more they are pigeonholed into a specific type of content.
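A bare-bones version of that kind of recommendation logic scores each unseen show by how heavily it co-occurs with shows the viewer already watched. The sketch below uses invented titles and is nothing like Netflix’s actual algorithm, but it shows why recommendations tend to reinforce what similar viewers already watch:

```python
# Toy "viewers who watched X also watched Y" recommender: score each unseen
# show by how many of this viewer's shows it co-occurs with in other
# viewers' histories. Invented data; not Netflix's actual algorithm.
from collections import Counter

histories = {
    "alice": {"Space Drama", "Baking Show", "Crime Docs"},
    "bob":   {"Space Drama", "Crime Docs", "Sitcom Reruns"},
    "carol": {"Baking Show", "Sitcom Reruns", "Nature Docs"},
}

def recommend(viewer, histories, top_n=2):
    seen = histories[viewer]
    scores = Counter()
    for other, shows in histories.items():
        if other == viewer:
            continue
        overlap = len(seen & shows)      # how similar the other viewer is
        for show in shows - seen:        # shows this viewer hasn't watched
            scores[show] += overlap
    return [show for show, _ in scores.most_common(top_n)]

print(recommend("alice", histories))
```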

Deep learning is a form of machine learning that can produce better results by passing data through multiple layered algorithms. For example, there are numerous forms of English spoken around the world. A customer service bot can begin each conversation in standard English and then use layered algorithms to analyze the speaker’s dialect and shift its responses to more closely match that speaker.

I’m not implying that today’s techniques are not worthwhile. They are being used to create numerous automated applications that could not be done otherwise. However, almost every algorithm-based technique in use today will become instantly obsolete when a real AI is created.

I’ve read several experts who predict that we are only a few years away from an AI desert – meaning that we will have milked about all that can be had out of machine learning and deep learning. Developments with those techniques are not leading toward a breakthrough to real AI – machine learning is not part of the evolutionary path to AI. At least for today, both AI and 5G are largely non-existent, and the things passed off as these two technologies are pale versions of the real thing.

California’s New Privacy Law

If you use the web much, you probably noticed a flurry of new privacy notices at the end of last year, either through pop-up notifications when you visited a website or by email. These notifications were all due to the California Consumer Privacy Act, the new privacy law that went into effect on January 1.

The law applies to companies that use the web and have annual revenues over $25 million, companies that buy, sell, or collect data on 50,000 or more consumers, and companies of any size that make more than 50% of their revenue by selling customers’ personal information.
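As a purely hypothetical illustration (not legal advice), those three thresholds translate into a simple check like the one below – the field names are invented:

```python
# Hypothetical sketch of the CCPA applicability thresholds described above.
# Field names are invented; this is an illustration, not legal advice.
def ccpa_applies(annual_revenue, consumers_with_data, pct_revenue_from_selling_data):
    return (annual_revenue > 25_000_000
            or consumers_with_data >= 50_000
            or pct_revenue_from_selling_data > 0.50)

print(ccpa_applies(annual_revenue=10_000_000,
                   consumers_with_data=60_000,
                   pct_revenue_from_selling_data=0.0))  # True: crosses the 50,000-consumer test
```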

The new law has a lot of requirements for web companies operating in California. Web companies must provide California consumers the ability to opt-out from having their personal information sold to others. Consumers must be given the option to have their data deleted. Consumers must be provided the opportunity to view the data collected about them. Consumers also must be shown the identity of third parties that have purchased their data.

The new law defines personal data broadly to include things like name, address, online identifiers, IP addresses, email addresses, purchasing history, geolocation data, audio/video data, biometric data, or any effort made to classify customers by personality type or trends.

The penalties for violating the law are severe. Consumers can sue web companies for up to $2,500 per violation if they don’t offer these options, and for up to $7,500 per violation if a company intentionally violates the law. It’s not too hard to anticipate the class action lawsuits already brewing that will result from this law.

While these new rules only apply to web companies and how they interact with California consumers, many web sites have taken the safe approach and are applying the new rules to everybody. That’s a safe approach because it’s difficult for web companies to always know where a web visitor is from, especially for people who use VPNs to hide their location.

California isn’t the only state with new privacy rules. Washington has new rules that are not as severe as the California ones but that still layer a lot of new requirements onto ISPs. New York is working on a privacy law that is said to be even tougher than the California one.

These state laws are only in place because Congress seems unable to pass a set of federal privacy rules. The issue has been debated over the last two years, and draft bills have been written, but no proposed law has come before the Senate for a vote, so the issue has gone nowhere. People are rightfully concerned that their data is being used and many people want the government to set some guidelines to protect them. The states are filling the legislative void in the absence of federal legislators taking action.

Web companies will face dilemmas with a proliferation of state privacy laws. Do they try to comply separately for customers in each state? What’s most concerning for web companies is that as more states pass privacy laws, some of the laws will inevitably conflict. There is also a big question about how these laws apply to foreign companies. The California law is written to apply to every company interfacing with California consumers. To complicate matters for web companies, the European Union’s privacy rules are also tough and will inevitably conflict with parts of the California rules.

Like all new laws, this one will be tested in court. The more interesting challenges will be how the law impacts companies from outside California. The $25 million revenue threshold is low, and there are numerous companies across the country with revenues of that size that have likely done nothing in response to this law. If companies keep even the most rudimentary database of customer information, then theoretically they run afoul of this law if anybody in the database resides in California. There are going to be lawyers trying to make a living from chasing companies that violate the law, and I doubt it will take long for the lawsuits to surface.

Can 5G Replace WiFi?

Verizon recently posted a webcast with investors in which Ronan Dunne, EVP and CEO of the Verizon Consumer Group, said he believes that 5G hotspots using millimeter-wave spectrum will eventually displace WiFi in homes.

He cites major benefits of 5G over WiFi. He believes that a 5G network will be more reliable and more secure. He thinks people will value having the traffic inside their home encrypted as it rides Verizon’s 5G network, compared to the more public nature of WiFi, where every neighbor can see a home’s WiFi network.

He also cites the convenience of being able to transfer 5G traffic between networks. He paints a picture where a customer making a call or watching a video using a home 5G hotspot will be able to walk out the door and seamlessly continue the session outside on their cellphone. That’s pretty slick stuff should that ever come to pass.

The picture he’s painting for Verizon investors is a future where homes buy a Verizon 5G subscription to use in place of WiFi. This is part of Verizon’s ongoing effort to find a business case for 5G. His vision of the future is possible, but there are a lot of hurdles for Verizon to overcome to achieve that vision.

It’s going to get harder to compete with WiFi since the technology is getting a lot better with two major upgrades. First, the industry has introduced WiFi 6, which brings higher-quality performance, lower latency, and faster data rates. WiFi 6 will use techniques like improved beamforming to greatly reduce interference between WiFi devices within the home.

Even more importantly, WiFi will be incorporating the new 6 GHz spectrum band that will increase bandwidth capabilities by adding seven 160 MHz bands and fourteen 80 MHz bands. It will be much easier to put home devices on separate channels when these new channels are added to the existing channels available on 2.4 and 5 GHz. This means that 5G will be competing against a much improved WiFi compared to the technology we all use today.
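The channel counts are simple arithmetic on the new band, which is expected to be roughly 1,200 MHz wide (an assumption worth checking against the FCC’s final order). A quick check shows the cited channel counts account for most of that spectrum:

```python
# Quick check of how much spectrum the cited channel counts represent.
# Assumes the new 6 GHz band is roughly 1,200 MHz wide; the final FCC
# channel plan determines the exact counts.
channels = {160: 7, 80: 14}
for width_mhz, count in channels.items():
    print(f"{count} channels x {width_mhz} MHz = {count * width_mhz} MHz of new spectrum")
```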

Another big hurdle for Verizon to overcome is that WiFi is ubiquitous today. WiFi is built into a huge number of devices, and a homeowner might already own a dozen or more devices capable of using WiFi. Verizon will have to somehow convince homeowners that 5G is so superior that it’s worth replacing the panoply of WiFi devices.

Another hurdle is that WiFi vendors are going to be painting almost the same picture as Verizon. The makers of WiFi routers are already envisioning future devices that will bring millimeter-wave spectrum, including 5G, into the home. There are vendors already working on devices that will provide both WiFi 6 and 5G over millimeter-wave connections simultaneously, using the publicly available 60 GHz V band. These solutions envision offering everything that Verizon can do, except the ability to roam seamlessly in and out of a home – and it will be done by selling a box instead of a new monthly subscription.

Another interesting hurdle to switching home networks to 5G is that there might be separate 5G solutions for each cellular carrier, using different bands of spectrum. It’s relatively easy for device makers today to build a cellphone or other device that can use different cellular carriers because the carriers all use similar spectrum. But as each cellular company picks a different mix of frequencies moving forward, there are likely going to be cellphones and other devices that are specific to one carrier. It’s impossible to build a cellphone with today’s battery technology that can receive a huge range of spectrum bands – the multiple antenna systems would drain a cellphone dry in no time.

The largest hurdle of all is that WiFi is free to use after buying a WiFi router or meshed WiFi devices for the home. There is no monthly subscription fee to use the wireless WiFi connections within the home. Verizon clearly foresees a world where every home has a new monthly subscription to use its in-home 5G network.

Mr. Dunne makes one good point. It’s becoming increasingly clear that public WiFi networks are susceptible to hacking. A 5G network controlled by a carrier should be a lot safer than a WiFi hotspot managed by a coffee shop. The big question is whether that is enough incentive for people to buy 5G-capable devices or for coffee shops to switch to 5G networks. And even if coffee shops go with a 5G solution, will homes follow suit?

Mr. Dunne’s vision rests on an underlying assumption that people will value data security enough to be willing to pay more for it. He envisions people choosing a managed network when they have a choice. He could be right, and perhaps there will be enough data breaches in the coming years with WiFi that the paradigm will shift from WiFi to 5G. But it’s going to be incredibly hard to dislodge WiFi, particularly when it’s evolving and improving along with 5G.

Even if Mr. Dunne is right, this shift is not coming soon, probably not within this decade. For now, WiFi has won the device war and any shift to 5G would drag out over many years. It’s going to be incredibly difficult for the cellular carriers to convince everybody to switch to 5G.

I sympathize with Mr. Dunne’s dilemma. Investors want to understand where the revenues will come from to fund the expensive upgrades to 5G. Verizon and the other cellular carriers have tossed out a lot of ideas, but so far none of them have stuck to the wall.  Investors are getting rightfully nervous since there doesn’t appear to be any significant 5G revenues coming in the next few years. The carriers keep painting pictures of an amazing 5G future as a way to not have to talk about lack of 5G revenues today.

The Fragile Supply Chain

The recent outbreak of the coronavirus reminded us how fragile the supply chain is for telecom. As it turns out, the Hubei province of China is where much of the world’s supply of optics and lasers is built – the key components in every device used to communicate over a fiber network. Within days after the reality of the virus became apparent, the stocks of tech companies that rely on lasers took a hit.

The supply chain for electronics manufacturing stretches worldwide. The lasers are made in one place. The chips in devices are made somewhere else. Other electronic components come from a third geographic source. Components like cellphone screens and other non-electric components come from yet a different place. And the raw materials to make all of these devices come from markets all over the world.

The virus scare made the world wake up to the fragility of the supply chain. Without lasers, there would be no fiber-to-the-home devices manufactured. There would be no new servers in data centers. There would be no new small cell sites built or activated. Major industries could be brought to their knees within weeks.

It’s not hard to understand why I say the supply chain is fragile. Consider smartphones. There are probably a dozen components in a smartphone that must be delivered on time to a smartphone factory to keep the manufacturing process going. If any one of those components can’t be delivered, smartphone manufacturing comes to a halt. The manufacturing floor can be crippled by a lack of screens just as much as it can suffer if the chips, antennas, or other key electronic components become unavailable.

It’s impossible to know if the coronavirus will cause any major disruption in the supply chain for fiber optics – but the point is that it could. If it’s not a virus today, disruptions could come from a wide range of natural disasters and manmade problems. I remember a fire that destroyed a fiber optic cable factory a few decades ago that created a major shortfall of optic cables for a year. Floods, fires, earthquakes, and other disasters can knock out key manufacturing sites.

Manmade disruptions to the supply chain are even easier to imagine. We saw the price of electronics components shoot up over the last year due to tariff battles between the US and China. The supply chain can be quickly cut if the country making devices goes to war, or even undergoes an ugly regime change. It’s also now possible to weaponize the supply chain and threaten to cut off key components when negotiating other issues.

I’m sure that very few Americans realized that the Wuhan region has a near-monopoly on the manufacture of lasers. A worldwide economy rewards the creation of monopolies because components are cheapest when an industry takes full advantage of economies of scale. The companies in the Wuhan region can likely manufacture lasers cheaper than anybody else.

From a strategic position, countries like the US should foster their own industries to manufacture vital components. But that’s not easy or practical to achieve. A new US company trying to compete on the world stage by making lasers is likely to be more expensive and unable to compete when the supply chain is humming at normal capacity. It’s hard to picture creating a competitor to the Wuhan region that can manufacture lasers in the quantities, and at a price the market is willing to pay.

In the long run, the world always finds alternate solutions to any permanent changes in the supply chain. For example, if China is ever unable to export lasers, within a few years other countries would pick up the slack. But the fiber industry would be devastated during the lull needed to find a new source of components. Bank of America reported last year that 3,000 major manufacturing companies were already reconsidering their supply chains due to tariffs and other concerns. Some of these companies, particularly electronics companies, have been considering bringing production back to the US now that factories can be heavily robotized. I’m sure the coronavirus has accelerated these decisions.

 

Will the Big Telcos Pursue RDOF Grants?

One of the most intriguing questions concerning the upcoming $16.4 billion RDOF grant program is whether the big telcos are going to participate. I’ve asked the question around the industry, and I’ve talked to folks who think the big telcos will fully wade into the reverse auction, while others think they’ll barely play. We’re not likely to know until the auction begins.

The big telcos were the full beneficiaries of the original CAF II program when the FCC surprisingly decided to unilaterally award them the full $9 billion in funding. In that program, CenturyLink received over $3 billion, AT&T almost $2.6 billion, Frontier nearly $2 billion, and Windstream over $1 billion. The telcos were supposed to upgrade much of their most rural properties to broadband speeds of at least 10/1 Mbps.

CenturyLink and Frontier both recently told the FCC that they are behind in the CAF II build out and didn’t meet their obligation at the end of 2019 to be 80% finished with the upgrades. From what I hear from rural communities, I think the problem is a lot more severe than just the telcos being late. Communities across the country have been telling me that their residents aren’t seeing faster speeds and I think we’re going to eventually find out that a lot of the upgrades aren’t being made.

Regardless of the problems with the original CAF II, the FCC is now offering the $16.4 billion RDOF grant program to cover much of the same areas covered by CAF II. The big telcos are faced with several dilemmas. If they don’t participate, then others are going to get federal assistance to overbuild the traditional big telco service territories. If the big telcos do participate, they have to promise to upgrade to meet the minimum speed obligations of the RDOF of 25/3 Mbps.

Interestingly, the upgrades needed to raise DSL speeds on copper to 25/3 Mbps are not drastically different than the upgrades needed to reach 10/1 Mbps. The upgrades require building fiber deeper into last-mile networks and installing DSL transmitters (DSLAMs) in the field to be within a few miles of subscribers. Fiber must be a little closer to the customer to achieve a speed of 25/3 Mbps rather than 10/1 Mbps – but not drastically closer.

I think the big telcos encountered two problems with the CAF II DSL upgrades. First, they needed to build a lot more fiber than was being funded by CAF II to get fiber within a few miles of every customer. Second, the condition of their rural copper is dreadful and much of it probably won’t support DSL speeds. The big telcos have ignored their rural copper for decades and found themselves unable to coax faster DSL speeds from the old and mistreated copper.

This raises the question of what it even means if the big telcos decide to chase RDOF funding. Throwing more money at their lousy copper is not going to make it perform any better. If they were unable to get 10/1 Mbps speeds out of their networks, then they are surely going to be unable to upgrade those same networks to 25/3 Mbps.

We can’t ignore that the big telcos have a natural advantage in the RDOF auction. They can file for the money everywhere, and any place where a faster competitor isn’t vying for the money, the big telcos will have a good chance of winning the reverse auction. There are bound to be plenty of places where nobody else bids on RDOF funding, particularly in places like Appalachia where the cost is so high to build, even with grant funding.

It would be a travesty to see any more federal grant money spent to upgrade rural DSL particularly since the FCC already spent $9 billion trying to upgrade the same copper networks. The copper networks everywhere are past their expected useful lives, and the networks operated by the big telcos are in the worst shape. I’ve known many smaller telcos that tried in the past to upgrade to 25/3 on rural DSL and failed – and those companies had networks that were well-maintained and in good condition. It would be impossible to believe the big telcos if they say they can upgrade the most remote homes in the country to 25/3 Mbps speeds. Unfortunately, with the way I read the RDOF rules, there is nothing to stop the big telcos from joining the auction and from taking big chunks of the grant money and then failing again like they did with the original CAF II.

Dwindling TV Viewers

It’s common knowledge that cable operators are losing customers to cord-cutting. The cable industry peaked in 2014 when there were 102 million homes that had a landline cable subscription or satellite cable service. By the end of 2019, the number of homes with traditional cable is approaching 86 million, a 16% decline in customers since 2014.

What’s not as well understood is that even people who buy a traditional cable service are watching it less as they spend time with Netflix and other online programming alternatives. This has resulted in significant losses of viewers for most traditional networks. The statistics below are from Variety and show the average daily viewers of the top 20 networks in 2019 and five years earlier in 2014. The numbers represent the average daily viewers of each network, in millions.

Rank  Network             2019   2014   Change
 1    CBS                 7.14   9.38   -24%
 2    NBC                 6.33   8.26   -23%
 3    ABC                 5.19   6.39   -19%
 4    Fox                 4.62   5.97   -23%
 5    Fox News            2.50   1.75    43%
 6    ESPN                1.75   2.21   -21%
 7    MSNBC               1.74   0.59   195%
 8    ION                 1.34   0.28   379%
 9    HGTV                1.31   1.36    -4%
10    Univision           1.30   2.97   -56%
11    Hallmark Channel    1.27   0.83    53%
12    USA Network         1.23   2.20   -44%
13    Telemundo           1.20   1.33   -10%
14    History             1.19   1.88   -37%
15    TLC                 1.18   1.10     7%
16    TBS                 1.16   1.89   -39%
17    Discovery Channel   1.13   1.41   -20%
18    TNT                 1.12   2.06   -46%
19    The CW              1.09   1.66   -34%
20    A&E                 1.06   1.29   -18%

The loss in viewers for some networks is eye-opening. Most networks have lost daily viewers at a faster pace than the overall industry loss of cable customers. CBS, the most-watched network, lost 2.24 million daily viewers over the last five years. Five networks fell out of the top twenty since 2014 – the Disney Channel, AMC, Adult Swim, FX, and the Food Network.
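The change column (and the CBS figure above) can be reproduced straight from the raw viewer counts – for example:

```python
# Recompute the change column from the 2019 and 2014 average daily viewers
# (in millions) for a few of the networks in the table above.
viewers = {"CBS": (7.14, 9.38), "Fox News": (2.50, 1.75), "Univision": (1.30, 2.97)}
for network, (v2019, v2014) in viewers.items():
    change = (v2019 - v2014) / v2014 * 100
    print(f"{network:10s} {v2019 - v2014:+.2f}M viewers ({change:+.0f}%)")
```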

Daily viewers matter because that’s the prime driver of advertising dollars. Variety reports that the trends deeper inside these numbers reveal that networks watched by younger viewers have the biggest losses, reflecting that the average age of traditional TV viewers has climbed significantly over the last few years. In 2014 the average age of viewers of the major broadcast networks was 54. By 2019 that had climbed to 61. Younger people are not watching traditional TV content. It’s no wonder that advertisers are moving to other platforms. In 2019, Facebook had 68 million daily users in the US and Twitter had 27 million.

The loss of viewers directly impacts the revenues of each network, since each network charges cable providers a fee to carry its content. As viewers have plunged, many of the networks have tried to make up the lost revenues through increases in those fees.

Not every network is losing viewers. The big winners over the last five years are ION and MSNBC, which have shot up the chart. A few other networks like Fox News, the Hallmark Channel, and TLC also gained viewers over the last five years. However, the vast majority of cable channels are steadily losing viewers year after year. With cord-cutting still growing and an explosion of new online programming, the loss of viewers is likely to continue and deepen.

 

Kari’s Law

ISPs should be aware of two new laws that went into effect in January. The first is Kari’s Law. This law requires that all phone systems sold, leased, or installed after February 16, 2020 be pre-configured so that a user can directly dial 911 with no other dialing steps required. The law is aimed at bringing the 911 system into buildings served by PBXs, key systems, Centrex, and cloud-based VoIP. The intent is to improve 911 for places like hotels, universities, hospitals, and businesses. The law puts the responsibility to comply not only on phone system manufacturers, but also on anybody who installs, manages, or operates a voice system.

The law also creates new requirements on existing phone systems that went into effect on January 6. Phone systems must be configured so that any call placed to 911 will immediately notify a ‘central location’ that a call has been placed to 911. This must be implemented immediately for any existing phone system that can provide the notification without needing a software or hardware upgrade. The FCC believes that a large percentage of phone systems are capable of making such notifications, so those notifications must be activated. It’s worth noting that there is no exemption for small businesses – anybody operating a private phone system is expected to comply with the law. Interestingly, the law applies to outward-calling locations like an outbound call center that can’t receive calls.

The FCC leaves a lot of interpretive room in defining a ‘central location’ for delivering the notification. Their goal is that a notification of a call to 911 must be sent to a location where there is a high likelihood that somebody will see it. The FCC wants 911 centers to be able to contact somebody at a business to gain entrance and to hopefully locate the person that made the 911 call.

Notifications can be in any format, including emails, text messages, pop-up notifications, alarms, etc. The new rules also require some ancillary information to be included in the notification, where technically feasible. This includes a callback number and as much information as possible about the location of the 911 caller (room number, wing of the building, etc.).
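As a purely hypothetical sketch of what such a notification might look like inside a phone system’s call-handling logic (the function names and the delivery mechanism are invented – a real deployment would hook into the PBX vendor’s event system):

```python
# Hypothetical sketch of a Kari's Law-style notification: when a station
# dials 911, alert a monitored "central location" with a callback number
# and whatever location detail is available. Names and the delivery
# mechanism (here just a print) are invented for illustration.
from datetime import datetime

def notify_central_location(extension, callback_number, location_detail):
    message = (f"[{datetime.now():%Y-%m-%d %H:%M:%S}] 911 CALL PLACED\n"
               f"  Extension: {extension}\n"
               f"  Callback:  {callback_number}\n"
               f"  Location:  {location_detail}")
    # In a real deployment this would go to email, SMS, or a front-desk console.
    print(message)

def on_outbound_call(extension, dialed_digits, callback_number, location_detail):
    if dialed_digits == "911":  # direct dialing, no prefix required
        notify_central_location(extension, callback_number, location_detail)

on_outbound_call("4021", "911", "304-555-0134", "Room 214, east wing")
```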

To the extent possible this also applies to ‘mobile’ devices that are controlled by a phone system. This might include cordless phones used inside of a business or desksets that can be moved anywhere within the business. Companies are not expected to track commercial cellphones that aren’t on their system or the location of devices that are carried off-site.

The second law that went into effect in January is Ray Baum’s Act. One of the many provisions of this law requires that 911 centers be provided with a ‘dispatchable location’. In plain English, that means that first responders want to know ‘the right door to kick down’ when responding to a 911 call. This requirement goes into effect concurrently with Kari’s Law and means that businesses must provide more information to 911 centers about how to respond to calls made from their location.

This new law is aimed at the same kinds of buildings as Kari’s Law – places like hotels or businesses where a 911 responder doesn’t know how to locate the person who called 911. At a minimum, every call to 911 must convey a validated 911 street address. That’s routine information for calls made from single-family homes, but not necessarily for a complex location like a hospital or a high-rise business campus. If a validated 911 address can be conveyed today, it must be. Businesses are given one year to implement this change and are expected to coordinate with 911 centers if they want to provide more complicated information on room numbers, building layouts, etc.
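A dispatchable location is essentially structured data attached to the call. A hypothetical record might look like the sketch below – the field names and address are illustrative, not taken from the FCC rules or any 911 standard:

```python
# Hypothetical dispatchable-location record for a call from a business campus.
# Field names and the address are illustrative only; real deployments follow
# the conventions worked out with the local PSAP.
dispatchable_location = {
    "validated_street_address": "1200 Example Parkway, Charleston, WV 25301",
    "building": "Tower B",
    "floor": 3,
    "room": "314",
    "additional_info": "Enter through the loading dock after hours",
}
```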

The law also requires better reporting for mobile devices that are controlled by a phone system. The rules expect the notification to 911 to include the best information possible about the location of a caller with a mobile device such as a cordless phone. This could be as detailed as a room number or something less accurate such as the location of the nearest WiFi hotspot. Companies have two years to implement this change.

The changes that come from Ray Baum’s Act are intended to be coordinated with the nearest 911 PSAP so that dispatchers understand the nature and quality of the location data when they get a call. Businesses are expected to notify PSAPs, and test as needed, to make sure that PSAPs know how to locate callers at a business. The FCC has the ability to fine parties that don’t comply with the law, so expect a few test cases within the next year when businesses fail to implement the new rules or fail to convey location information to their 911 center.