Bringing Back Communications Etiquette

Ahoy-hoy. That’s how Alexander Graham Bell suggested we answer the telephone. I’m sure Mr. Bell would be amazed to see that, 150 years later, we routinely talk to each other face-to-face on Zoom. I’ve recently been reading about the early days of telephony, and today’s blog is a lighthearted look back at how the public reacted to the newfound ability to electronically reach out to friends and family. I must say that a few of the old etiquette suggestions don’t sound too bad in the day of Zoom calls.

When phones were first introduced, telephone network owners encouraged keeping calls short. With early telephone technology, a phone call tied up a dedicated pair of wires between the two callers – there was not yet any technology for combining multiple calls onto the same piece of copper between exchanges or cities. A call from New York to San Francisco completely monopolized a copper connection from coast to coast. An early British calling guide suggested that people shorten calls by skipping “hello” and any introductory pleasantries and getting straight to the point. I remember doing traffic studies in the 70s when the average hold time for local calls was only 3 minutes. Tell me the last time you had a 3-minute Zoom call.

The biggest complaint of early operators was that people would walk away from calls they had requested. There was often a significant wait to place a long-distance call – operators had to secure a free copper path from end to end across the network. After going through all of the work to set up a call, operators often found that the originator had given up and walked away from the phone. Early phone books admonished callers not to request a call if they didn’t have the time to wait for it to be connected.

My favorite early practice is that some early phone books discouraged calling before 9:00 AM or after 9:00 PM. This was partly because phone companies didn’t want to staff operator boards around the clock, but it was also considered impolite to disturb people too early or too late. Many smaller telephone companies simply stopped manning the operator boards overnight.

Telephone calling was such a drastic societal change that phone companies routinely issued guides that detailed the etiquette for using the new telephone contraption. There was obviously no caller ID in the early days, and operators often did not stay on the line to announce a call. Phone etiquette books suggested it was impolite to ask who was calling and that people should guess the identity of the caller rather than ask. Some phone books suggested that anybody answering the phone should state their telephone number so that a caller would know if they had reached a wrong number.

One of my favorite early telephone etiquette suggestions is that people should not use the telephone to invite someone to a formal occasion. Something that important should be done in writing so that the invitee would have all of the details of the invitation in hand.

Phone books included diagrams showing that the proper distance to hold the phone from the mouth was 1.5 inches. People were admonished not to shout into the handset. One California phone book suggested that gentlemen trim their mustaches so that they could be clearly heard on the phone.

Of course, at the turn of the twentieth century, foul language was not tolerated. Somebody caught cursing on the phone by an operator stood a chance of losing their phone line or even getting a knock on the door from the police. Wouldn’t that concept throw a big wrench into the current First Amendment controversies about what is allowable online speech?

Still Waiting for IPv6

It’s now been a decade since the world officially ran out of blocks of IP addresses. In early 2011 the Internet Assigned Numbers Authority (IANA) announced that it had allocated the last block of IPv4 addresses and warned ISPs to start using the new IPv6 addresses. But here we are a decade later and not one of my clients has converted to IPv6.

Networks use IP addresses to identify devices. Every cellphone, computer, network router, and modem is assigned an IP address so that ISPs can route traffic to the right device. The world adopted IPv4 in 1982. Its 32-bit addresses provide almost 4.3 billion possible IP addresses – which was enough until 2011. IPv6 uses 128-bit addresses, which provide roughly 340 trillion trillion trillion (3.4 x 10^38) IP addresses, which ought to carry mankind for centuries to come. Like most of us, I hadn’t thought about this in a long time and recently went to look to see how much of the world has converted to IPv6.
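If you want to sanity-check those numbers, a few lines of Python (just arithmetic on the address widths, nothing specific to any network) show the difference in scale:

```python
# Address-space sizes follow directly from the address widths:
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32    # 4,294,967,296 - roughly 4.3 billion
ipv6_addresses = 2 ** 128   # about 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {float(ipv6_addresses):.2e} addresses")
print(f"IPv6 addresses per IPv4 address: {float(ipv6_addresses // ipv4_addresses):.2e}")
```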

At the end of 2020, around 30% of all web traffic was being routed using IPv6. Many of the biggest US ISPs have converted to IPv6 inside their networks. At the end of 2020, Comcast had converted 74% of its traffic to IPv6; Charter was at 54%. In the cellular world, both Verizon and AT&T route over 80% of traffic on IPv6, while T-Mobile is close to 100%. Around the world, some of the biggest ISPs have converted to IPv6. India leads the world with over 62% countrywide adoption at the end of 2020, with the US in fourth place at over 47%.

But the big caveat with the above statistics is that a lot of the big ISPs are using IPv6 inside the networks but are still communicating with the outside world using IPv4. After all of the alarms were sounded in 2011, why haven’t we made the transition?

First, carriers have gotten clever at finding ways to conserve IPv4 addresses. For example, small ISPs and corporations use a single external IP address to represent an entire network to the outside world. This allows private IP addresses to be assigned inside the network to reach individual customers and devices – much like CLECs reduced the number of telephone numbers they needed by switching internally with imaginary numbers.
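As a rough illustration of that address-sharing trick (a generic sketch using Python’s standard ipaddress module, not a description of any particular ISP’s setup), the “inside” addresses come from the private RFC 1918 ranges, so they don’t consume any of the scarce public IPv4 space:

```python
import ipaddress

# One public address represents the whole network to the outside world...
public_address = ipaddress.ip_address("8.8.8.8")  # a well-known public address, used only as an example

# ...while devices inside the network get private (RFC 1918) addresses
# that can be reused by every other network on the planet.
inside_addresses = [ipaddress.ip_address(f"10.0.0.{host}") for host in range(1, 6)]

print(public_address.is_private)                     # False - drawn from the scarce public pool
print(all(a.is_private for a in inside_addresses))   # True - these cost nothing from the public pool
```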

There is an extra cost for any ISP that wants to fully convert to IPv6. IPv6 is not backward compatible with IPv4, so any company that wants to route externally with IPv6 needs to maintain what is called a dual stack, meaning the network has to run both protocols side by side and route traffic in and out using whichever protocol the other end supports. This adds expense but, more importantly, slows down the routing.
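One way to see dual stack from the outside is to ask DNS for both the IPv4 (A) and IPv6 (AAAA) records of a site. Here’s a small sketch using Python’s standard library – what comes back depends entirely on the host you query and on your own resolver and connectivity:

```python
import socket

def resolve_dual_stack(hostname: str):
    """Return the IPv4 and IPv6 addresses that DNS advertises for a hostname."""
    results = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    ipv4 = sorted({entry[4][0] for entry in results if entry[0] == socket.AF_INET})
    ipv6 = sorted({entry[4][0] for entry in results if entry[0] == socket.AF_INET6})
    return ipv4, ipv6

# A host that publishes both record types is reachable either way;
# an IPv4-only host simply returns an empty IPv6 list.
v4, v6 = resolve_dual_stack("example.com")
print("IPv4 (A) records:   ", v4)
print("IPv6 (AAAA) records:", v6)
```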

It’s also impossible to fully convert a network to IPv6 until every device using the network is IPv6 compatible. This becomes less of an issue every year, but every ISP network still has customers and devices that are not IPv6 compatible. Customers still using a 12-year-old WiFi router would go dead with a full conversion to IPv6. This is one of the primary reasons that the big ISPs and cellular carriers aren’t at 100% IPv6. There are still a million folks using old flip phones that can’t be addressed with IPv6.

There is a definite cost for not converting to IPv6. There is a grey market for buying IPv4 addresses, and the cost per address has climbed in recent years. The typical price for an IPv4 address ranged from $24 to $29 during 2020. With all of the grant money being handed out, I expect a number of new ISPs to be created in the next year. Many of them are going to be surprised that they need to spend that much to get IP addresses.

The main reason that the conversion hasn’t happened is that nobody is pushing it. The world keeps functioning on IPv4, and no ISP feels threatened by putting off the conversion. The first small ISPs that take the plunge to IPv6 will pay the price of being first with the technology – and nobody wants to be that guinea pig. Network purists everywhere are somewhat disgusted that their employers won’t take the big plunge – but even a decade after we ran out of IP addresses, it’s still not the right time to tackle the conversion.

I have no idea what will finally set off a rush to convert, but it inevitably will happen. Until then, this will be a topic that you’ll barely hear about.

Insuring Fiber Networks

A few times each year I get the question of where a new ISP should go to get insurance for a fiber network. The question comes from new ISPs that are hoping that they can buy insurance that will compensate them for catastrophic damage to a new fiber network.

Everybody in the industry knows of examples of catastrophic fiber damage. Fiber networks along the coasts can be devastated by hurricanes. The second biggest cause of network damage is probably ice storms, which can knock down wires across huge geographic areas. We’ve seen networks damaged by heavy floods. Networks are sometimes hit by tornadoes. And in the last year we saw poles and fiber networks destroyed by the massive forest fires in the West, and occasionally in Appalachia.

The damage can be monumental. We saw hurricanes a few years ago that broke every utility pole in the Virgin Islands. We’ve seen a few towns along the Gulf Coast leveled by hurricanes. The fires last year burned through large swaths of forest, destroying utility poles and melting the wires.

Asking for insurance against such damages sounds like a sensible question since we seem to be able to buy insurance for almost anything. But the bad news for those looking for insurance for a fiber (or electric) network is that such insurance doesn’t exist – at least not in any affordable form.

This mystifies people who wonder why they can buy insurance to protect a $50 million building but not a $50 million fiber network. The answer is that a building owner can take multiple steps to protect a building. For example, an insurer might insist on a sprinkler system throughout a building to protect against fire damage. But there is nothing a fiber network owner can do to brace against the ravages of Mother Nature. A large network can be badly damaged at any time and can be hit multiple times. In just the last few years, the City of Ruston, Louisiana was hit by a devastating tornado and then two subsequent hurricanes.

So how do owners get compensated after major network damage? The answer is FEMA. When there has been bad damage from the natural disasters listed earlier, a Governor and President can declare an emergency, and this unleashes state and federal funds to help pay to fix the damages. People often wonder about the size of federal funding after a disaster – the government isn’t only helping to fix destroyed roofs after a hurricane, but also the telco, cable, and power networks.

If you press an insurance company hard enough, you can get damage insurance for fiber. I had a client that won an RFP to build fiber for a rural school, and the school insisted that the network be insured. Even after being shown evidence that this is not a normal insurance policy, the school system insisted. My client bought a 2-year damage insurance policy for the newly built fiber that was priced at almost 20% of the cost of the fiber.

I remember when Fire Island in New York went without broadband and cellular coverage for well over a year after Hurricane Sandy while Verizon, the New York Public Service Commission, and FEMA argued about how the network should be rebuilt. It’s far better to protect against catastrophic damage whenever possible. I have clients in storm-prone areas that have paid the extra cost to bury fiber networks. I have clients in flood zones that place electronics huts on stilts. A lot of ISPs work hard to keep trees trimmed to reduce ice damage. These ISPs know that not taking these extra precautions means the network is likely to get damaged and go out of service. There is nothing more satisfying than having a fiber network that keeps humming along during and after a big storm. Unfortunately, Mother Nature often has different plans.

The Slow Death of Satellite TV?

There have been rumors for years about merging Dish Network and DirecTV to gain as much market synergy as possible for the two sinking businesses. It’s hard to label these companies as failures just yet because the two companies collectively still had 21.8 million customers at the end of 2020 (DirecTV 13.0 million, Dish 8.8 million). That collectively makes them the largest provider of traditional pay TV, ahead of Comcast at 19.8 million and Charter at 16.2 million.

But both companies have been bleeding customers over the last few years. In 2020, DirecTV lost over 3 million customers and Dish Network lost nearly 600,000. Together, the two companies lost 14% of their customers in 2020. This is not unusual in an industry where Comcast lost 1.4 million cable customers during the same year.

Dish Network CEO Charlie Ergen has been predicting for years that a merger of the two companies is inevitable. A combined company could save money on infrastructure and overhead to prop up the business.

There are a number of factors that make a merger complicated. AT&T divested 30% of DirecTV earlier this year to TPG Capital in a deal that included the TV offerings of DirecTV, U-verse, and AT&T TV.

Probably the biggest long-term trend that bodes poorly for satellite TV is the federal government’s push to bring better broadband to rural America. Selling TV to customers with poor broadband is still the sweet spot for the two companies. As the number of homes with good broadband rises, the prospects for satellite TV sink.

My firm has been doing community surveys for twenty years, and we’ve noticed a big change in satellite TV penetration. A decade ago, I expected to find a 15% market share for satellite TV in almost any town we surveyed. But in the last few years, people in towns appear to be the ones who have bailed on satellite TV. It’s now rare for us to find more than a few percent of households in towns still buying satellite TV. Households have moved to the web to find video content, with the big losers being satellite TV and landline cable companies.

I notice the same thing when traveling around the country. It used to be that you’d see satellite dishes peppered through every neighborhood. But satellite dishes are becoming a rarity. I know from walking in my neighborhood that only one house still has satellite TV. Just a few years ago there were many more.

Finally, these two companies are both saddled with the ever-increasing programming costs that have plagued the whole industry. Cable customers everywhere have rate fatigue as prices increase every year to account for higher programming costs. Satellite TV is like the rest of the industry and is pricing itself out of the budget range of the average household.

The two companies are also each saddled with a lot of debt. Craig Moffett of MoffettNathanson recently estimated that a combined company might not have a valuation of more than $1 billion – a bad harbinger for a merger.

It’s hard to picture any investor group that would want to back this merger. The whole idea behind a merger is that the combined company is worth more than the individual pieces. But even if the combined satellite companies were able to cut costs with a merger, it seems likely that any savings would quickly get subsumed by continued customer losses.

It’s not unrealistic to think that a decade from now this industry will have disappeared. Maybe the companies can hang on longer even as the number of customers continues to drop – but the math of doing so doesn’t bode well. The end of the satellite TV industry would feel odd to me. I witnessed the meteoric growth of the industry and watched satellite dishes popping up everywhere in the US. Satellite TV could fall into the category of huge tech industries that popped into existence, grew, and then died within our adult lifetimes. I’m betting that we’re not far off from the day when kids will have no idea what a satellite dish is, just as they now stare perplexed at dial telephones.

To 5.5G and Beyond

I recently saw an article in FierceWireless reporting that Huawei thinks we are going to need an intermediate step between 5G and 6G – something like 5.5G. To me, this raises the more immediate question of why we are not talking about the steps between 4G and 5G.

The wireless industry used to tell the truth about cellular technology. You don’t need to take my word for it – search Google for 3.5G and you’ll find mountains of articles from 2010 to 2015 that talked about 3.5G as an important intermediate step between 3G and 4G. It was clearly understood that it would take a decade to implement all of the specifications that defined 4G, and industry experts, manufacturers, and engineers regularly debated the level of 4G implementation. Few people realize that we didn’t have the first fully 4G-compliant cell site until late 2018. Up until then, everything that was called 4G was something a little less than 4G. Interestingly, we debated the difference between 3.1G and 3.2G, but once the industry hit what might be considered 3.5G, the chatter stopped, and the industry leaped to labeling everything as 4G.

That same industry hype that didn’t want to talk about 3.8G has remained intact, and somehow, magically, we leaped to calling the next-generation technology 5G before even one of the new 5G technologies had been implemented in the network. All we’ve done so far is layer new spectrum bands onto 4G and label that as 5G. These new spectrum bands require phones that can receive the new frequencies, which phone manufacturers gleefully label as 5G phones. I’m not convinced we are even at 4.1G yet, and still the industry has fully endorsed labeling the first baby steps toward 5G as if we have full 5G.

I have to laugh when I see articles already talking about what comes next after 5G. It’s like already picking the best marketing names for the self-driving hovercars that will replace regular self-driving cars. We are only partway down the path of implementing self-driving cars that people are ready to buy and trust. The government wouldn’t let a car manufacturer falsely declare that it has a fully self-driving car – but we seem to have no problem allowing cellular companies to proclaim 5G technology that doesn’t yet exist.

Back to the article about 6G. Huawei suggests that 5.5G would be 10 times faster than the current 5G specification, with lower latency. Unfortunately for this suggestion, we just suffered through a whole year of Verizon TV ads showing cellphones achieving gigabit-plus speeds. It’s almost as if Huawei hasn’t seen the Verizon commercials and doesn’t know that the US already has 5.5G. I’m thrilled to be the first to report that the US has already won the 5.5G race!

But it’s also somewhat ludicrous to be talking about 5.5G as an intermediate step on the way to 6G. The next generation of wireless technology we’re labeling as 6G will use terahertz spectrum. The wavelengths of those frequencies are so small that a terahertz signal beamed from a cellular tower would dissipate before it hits the ground. Even so, the technology holds a lot of promise for providing extremely high bandwidth for indoor communications. But faster 5G is not an intermediate spot between today’s cellular technology and terahertz-based technology.

Interestingly, there could have been an intermediate step. We still have a long way to go to harness millimeter-wave spectrum in the wild. These frequencies require pure line-of-sight and pass through virtually nothing. I would expect over the next decade or two that lab scientists will find much better ways to propagate and use millimeter-wave spectrum.

But the cellular industry already claims to have solved all of the issues with millimeter-wave spectrum and already counts it as part of today’s 5G solution. It’s going to be anticlimactic when scientists announce breakthroughs in using millimeter-wave spectrum that the cellular industry has been claiming all along. Using millimeter-wave spectrum to its fullest capability could have been 5.5G. I can’t wait to see what the industry claims instead.

Reporting the Broadband Floor

I want to start by giving a big thanks to Deb Socia for today’s blog. I recently wrote a blog about the upcoming public reporting process for the FCC maps. In that blog, I noted that ISPs are going to be able to continue to report marketing speeds in the new FCC mapping. An ISP that may be delivering 3 Mbps download can continue to report broadband speeds of 25/3 Mbps as long as that is the speed it markets to the public. This practice of allowing marketing speeds that are far faster than actual speeds has resulted in a massive overstatement of broadband availability. It is the number one reason why the FCC badly undercounts the number of homes that can’t get broadband. The FCC literally encourages ISPs to overstate the broadband product being delivered.

In my Twitter feed for this blog, Deb posted a brilliant suggestion, “ISPs need to identify the floor instead of the potential ceiling. Instead of ‘up to’ speeds, how about we say ‘at least’”.

This simple change would force some honesty into FCC reporting. This idea makes sense for many reasons. We have to stop pretending that every home receives the same broadband speed. The speed delivered by many broadband technologies varies with distance. Telco DSL gets noticeably slower the farther the signal has to travel. The fixed wireless broadband delivered by WISPs loses speed with distance from the transmitting tower. The fixed cellular broadband that the big cellular companies are now pushing has the same characteristic – speeds drop quickly with distance from the cell tower.

It’s a real challenge for an ISP using any of these technologies to pick a representative speed to advertise to customers – but customers want to know a speed number. DSL may be able to deliver 25/3 Mbps to a home that’s within a quarter-mile of a rural DSLAM, but a customer eight miles away might be lucky to see 1 Mbps. A WISP might be able to deliver 100 Mbps download speeds within the first mile from a tower, but the WISP might be willing to sell to a home that’s 10 miles away and deliver 3 Mbps for the same price. The same is true for the fixed cellular data plans recently being pushed by AT&T, Verizon, and T-Mobile. Customers who live close to a cell tower might see 50 Mbps broadband, but customers farther away are going to see a tiny fraction of that number.

The ISPs all know the limitations of their technology, but the FCC has never tried to acknowledge how technologies behave in real markets. The FCC mapping rules treat each of these technologies as if the speed is the same for every customer. Any mapping system that doesn’t recognize the distance issue is going to be largely fiction.

Deb suggests that ISPs must report the slowest speed they are likely to deliver. I want to be fair to ISPs, so I suggest they report both the minimum “at least” speed and the maximum “up to” speed. Those two numbers will tell the right story to the public because together they provide the range of speeds being delivered in a given census block. With the FCC’s new portal for customer input, the public could weigh in on the “at least” speeds. If a customer is receiving speeds slower than the “at least” speed, then, after investigation, the ISP would be required to lower that number in its reporting.

This dual reporting will also allow quality ISPs to distinguish themselves from ISPs that cut corners. If a WISP only sells service to customers within 5 or 6 miles of a transmitter, then the difference between its “at least” speeds and its “up to” speeds would be small. But if another WISP is willing to sell a crappy broadband product a dozen miles from the transmitter, there would be a big difference between its two numbers. If this is reported honestly, the public will be able to distinguish between these two WISPs.
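Here’s a toy sketch of what the two numbers might look like side by side – the ISP names and the “at least” figures are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class SpeedReport:
    isp: str
    at_least_mbps: int   # slowest speed the ISP is likely to deliver in the area
    up_to_mbps: int      # fastest speed it markets in the same area

# Hypothetical reports: one WISP that only sells close to its towers,
# and one that sells a much slower product a dozen miles out.
reports = [
    SpeedReport("WISP A (sells within ~5 miles of a tower)", at_least_mbps=50, up_to_mbps=100),
    SpeedReport("WISP B (sells a dozen miles from a tower)", at_least_mbps=3, up_to_mbps=100),
]

for r in reports:
    print(f"{r.isp}: at least {r.at_least_mbps} Mbps, up to {r.up_to_mbps} Mbps")
```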

This dual reporting of speeds would also highlight the great technologies – a fiber network is going to have a gigabit “at least” and “up to” speed. This dual reporting will end the argument that fixed wireless is a pure substitute for fiber – which it clearly is not. Let the two speeds tell the real story for every ISP in the place of marketing hype.

I’ve been trying for years to find a way to make the FCC broadband maps meaningful. I think this is it. I’ve never asked this before, but everybody should forward this blog to the FCC Commissioners and politicians. This is an idea that can bring some meaningful honesty into the FCC broadband maps.

Rural Redundancy

This short article details how a burning tree cut off fiber optic access for six small towns in Western Massachusetts: Ashfield, Colrain, Cummington, Heath, Plainfield, and Rowe. I’m not writing about this today because this fiber cut was extraordinary, but because it’s unfortunately very ordinary and common. There are fiber cuts every day that isolate communities by cutting off Internet access.

It’s not hard to understand why this happens in rural America. In much of the country, the fiber backbone lines that support Internet access to rural towns use the same routes that were built years ago to support telephone service. The telephone network was configured as a hub and spoke, with all of the towns in a region connected by a single fiber line to a central tandem switch that was the historic focal point for regional telephone switching.

Unfortunately, a hub-and-spoke network (which resembles the spokes of a wagon wheel) does not have any redundancy. Each little town or cluster of towns typically had a single path to reach the telephone tandem – and today, to reach the Internet.

The problem is that an outage that historically would have interrupted telephone service now interrupts broadband. This one cut in Massachusetts is a perfect example of how reliant we’ve become on broadband. Many businesses shut down completely without broadband. Businesses take orders and connect with customers in the cloud. Credit card processing happens remotely in the cloud. Businesses are often connected to distant corporate servers that provide everything from software connectivity to voice over IP. A broadband outage cuts off students taking classes from home and adults working from home. An Internet outage cripples most work-from-home people who work for distant employers. A fiber cut in a rural area can also cripple cell service if the cellular carriers use the same fiber routes.

The bad news is that nobody is trying to fix the problem. The existing rural fiber routes are likely owned by the incumbent telephone companies, and they are not interested in spending money to create redundancy. Redundancy in the fiber world means having a second fiber route into an area so that the Internet doesn’t go dead if the primary fiber is cut. One of the easiest ways to picture a redundant solution is a ring of fiber that would be equivalent to the rim of the wagon wheel. This fiber would connect all of the ‘spokes’ and provide an alternate route for Internet traffic.
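To make the wagon-wheel picture concrete, here’s a small sketch (the town names and links are invented) that checks which towns can still reach the Internet hub after a single fiber cut – first on a pure hub-and-spoke network, then after adding ring links along the “rim”:

```python
from collections import defaultdict, deque

def still_connected(links, cut, source="hub"):
    """Return the set of nodes that can still reach `source` after one link is cut."""
    graph = defaultdict(set)
    for a, b in links:
        if {a, b} != set(cut):          # simulate the fiber cut by dropping that link
            graph[a].add(b)
            graph[b].add(a)
    reached, queue = {source}, deque([source])
    while queue:                         # plain breadth-first search from the hub
        for neighbor in graph[queue.popleft()]:
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

towns = ["A", "B", "C", "D"]
spokes = [("hub", t) for t in towns]                               # hub-and-spoke only
ring = spokes + [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]   # add a ring around the rim

cut = ("hub", "B")   # a single fiber cut on town B's spoke
print("Hub-and-spoke:", sorted(still_connected(spokes, cut)))   # B is isolated
print("With a ring:  ", sorted(still_connected(ring, cut)))     # every town still reaches the hub
```

On the hub-and-spoke network the single cut isolates town B entirely; with the ring in place, B’s traffic simply flows through a neighboring town.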

To make things worse, the fiber lines reaching into rural America are aging. These were some of the earliest fiber routes built in the US, and fiber built in the 1980s was not functionally as good as modern fiber. Some of these fibers are already starting to die. Eventually we’re going to face scenarios where fiber lines like the one in this article die and possibly don’t get replaced. A telco could use a dying fiber line as the reason to finally walk away from obsolete copper DSL in a region and refuse to repair the fiber. That could isolate small communities for months or even years until somebody found the funding to replace the fiber route.

There have been regions that have tackled the redundancy issue. I wrote a blog last year about Project Thor in northwest Colorado where communities banded together to create the needed redundant fiber routes. These communities immediately connected critical infrastructure like hospitals to the redundant fiber and over time will move to protect more and more Internet traffic in the communities from routine and crippling fiber cuts.

This is a problem that communities are going to have to solve on their own. This is not made easier by the current fixation of only using grants to build last-mile connectivity and not middle-mile fiber. All of the last mile fiber in the world is useless if a community can’t reach the Internet.

Big Funding for Libraries

The $1.9 trillion American Rescue Plan Act (ARPA) includes a lot of interesting pockets of funding that are easy to miss due to the breadth of the Act. The Act quietly allocates significant funding to public libraries, which have been hit hard during the pandemic.

The ARPA first allocates $200 million to the Institute of Museum and Library Services. This is an independent federal agency that provides grant funding for libraries and museums. $178 million of the $200 million will be distributed through the states to libraries. Each state is guaranteed to get at least $2 million, with the rest distributed based upon population. This is by far the largest federal grant ever made directly for libraries.

Libraries are also eligible to apply to the $7.172 billion Emergency Connectivity Fund that the ARPA is funding through the FCC’s E-Rate program. This program can be used to pay for hotspots, modems, routers, laptops, and other devices that can be lent to students and library patrons to provide broadband.

The ARPA also includes $360 billion in funding that will go 60% to states and 40% directly to local governments and tribal governments. Among other things, this funding is aimed at offsetting cuts made during the pandemic to public health, safety, education, and library programs.

There is another $130 billion aimed at offsetting the costs associated with reopening K-12 schools, to be used for hiring staff, reducing class sizes, and addressing student needs. The funds can also be invested in technology support for distance learning, including 20% that can be used to address learning loss during the pandemic. This funding will flow through the Department of Education according to the Title I formula, which supports schools based upon their level of poverty.

Another $135 million will flow through the National Endowment for the Arts and Humanities to support state and regional arts and humanities agencies. At least 60% of this funding is designated for grants to libraries.

There is also tangential funding that could support libraries. This includes $39 billion for Child Care and Development Block Grants and Stabilization Fund plus $1 billion for Head Start that might involve partnerships with schools and libraries. There is also $9.1 billion for states and $21.9 billion for local afterschool and summer programs to help students catch back up from what was a lost school year for many.

It’s good to see this funding flow to libraries. Many people may not understand the role that libraries play in many communities as the provider of broadband and technology for people who can’t afford home broadband. Libraries have struggled to maintain this role through the pandemic and the restrictions that kept patrons out of their buildings. Libraries in many communities have become the focal point for the distribution of broadband devices during the pandemic.

One of the lessons that the pandemic has taught us is that we need to connect everybody to broadband. As hard as the pandemic has been on everybody, it’s been particularly hard on those that couldn’t connect during the pandemic. This continues today as many states have established vaccine portals completely online.

Communities everywhere owe a big thanks to librarians for the work they’ve done in the last year to keep our communities connected. When you get a chance, give an elbow bump to your local librarian.

Investing in Rural Broadband

There was a headline in a recent FierceTelecom article that I thought I’d never see – Jefferies analyst says rural broadband market is ripe for investment. In the article, analyst George Notter is quoted talking about how hot rural broadband is as an investment. He cites the large companies that have been making noise about investing in rural broadband.

Of course, that investment relies on getting significant rural grants. We’ve seen the likes of Charter, Frontier, CenturyLink, Windstream, and others win grants in the recent RDOF reverse auction. I have municipal clients who are having serious discussions with other large incumbents about partnering – when these incumbents wouldn’t return a call a year ago. It’s amazing how billions of dollars of federal grants can quickly change the market. Practically every large carrier in the country is looking at the upcoming broadband grants as a one-time opportunity to build broadband networks cheaply.

This is a seismic change for the industry. Dozens of subdivisions with lousy broadband have contacted me over the years wondering how to get the interest of the nearby cable incumbent. We’ve just gone through a decade when there has been little expansion of cable company footprints. In many cases, the reluctance of a cable company to build only a few miles of fiber to reach a community of several hundred homes has been puzzling – these subdivisions often look like a good business opportunity to me. The first carrier to build broadband in such areas is likely to get 70% to 90% of the households as customers almost immediately.

The analyst mentioned the newfound interest in rural broadband from the cellular carriers. It’s been a mystery to me over the last decade why AT&T, Verizon, and others didn’t take advantage of rural cellular towers to get new broadband customers. There are a lot of places in rural America where cellular broadband has been superior to rural DSL and satellite broadband. It’s odd to finally see these carriers want to build now, at a time when people are hoping for technologies that are faster than cellular broadband. The cellular carriers have instead poisoned the rural market by selling cellular hotspot plans with tiny data caps. I heard numerous stories during the pandemic of families spending $500 to $1,000 per month on a hotspot – with the alternative being throttled to dial-up speeds after hitting the small data caps. These customers are never going back to the cellular carriers if they get a different option.

Some of the sudden expansion by the big companies mystifies me. For example, Charter won $1.2 billion in the RDOF to expand into rural areas. The company is matching this with $3.8 billion of its own money, which means Charter is building rural broadband with only a 24% federal subsidy. I’ve studied some of these same grant areas and couldn’t see a way to build in these rural communities without grants covering at least 50% of the cost of construction. The RDOF math might make sense where Charter is building in areas directly adjacent to an existing market. But Charter took grants in counties where it doesn’t have any existing customers. This makes me wonder how the company is eventually going to feel about what it has bitten off. I’m betting we won’t see articles talking about rural investment opportunities after a few big ISPs bungle the expansion into rural areas.

When talking about how rural properties are good investments due to grant money, I always wonder if the companies thinking about this are considering the extra operational costs in rural areas. Truck rolls are a lot longer than in an urban market. There are a lot of miles of cable plant that are subject to being cut. Before the pandemic, 16% of states and 35% of counties had a sustained population decrease. Even with grant funding, many rural communities are sparsely populated and often suffer from low household incomes, and it’s hard to see an ISP doing much better than breaking even in many of them – something cooperatives and municipalities are willing to undertake but which is poison for publicly traded corporations.

Unfortunately, I think I know at least some of the reasons why some companies are attracted to the grants. The big telcos have been cutting their workforces and curtailing maintenance for decades. It’s a lot easier to make money in a grant-funded rural market if a carrier already plans to scrimp on needed maintenance expenditures. To me, that’s the subtle message not mentioned in the Jefferies analyst’s opinion – too many big carriers know how to milk grant money to gain a financial advantage. Unfortunately, those kinds of investors are going to do more long-term harm than good in rural America.

AT&T Says No to Symmetrical Broadband

Since it seems obvious that the new FCC will take a hard look at the definition of broadband, we can expect big ISPs to start the lobbying effort to persuade the FCC to make any increase in the definition as painless as possible. The large ISPs seem to have abandoned any support for the existing definition of 25/3 Mbps because they know sticking with it gets them laughed out of the room. But many ISPs are worried that a fast definition of broadband will bypass their technologies – any technology that can’t meet a revised definition of broadband will not be eligible for future federal grants, and even more importantly can be overbuilt by federal grant recipients.

AT&T recently took the first shot I’ve seen in the speed definition battle. Joan Marsh, the company’s Executive VP of Federal Regulatory Relations, wrote a recent blog that argues against using symmetrical speeds in the definition of broadband. AT&T is an interesting ISP because the company operates three different technologies. In urban and suburban areas, AT&T has built fiber to pass over 14 million homes and businesses and says it will pass up to 3 million more over the next year or two. The fiber technology offers at least a symmetrical gigabit product. AT&T is also still a huge provider of DSL, though the company stopped installing new DSL customers in October of last year. AT&T’s rural DSL has speeds far south of the FCC’s 25/3 definition of broadband, although U-verse DSL in larger towns has download speeds as fast as 50 Mbps.

The broadband product that prompted the blog is AT&T’s rural cellular product. This is the company’s replacement for DSL, and AT&T doesn’t want the FCC to declare the product as something less than broadband. AT&T rightfully needs to worry about this product not meeting the FCC definition of broadband – because in a lot of places it is slower than 25/3 Mbps.

Reviews.org looks at over one million cellular data connections per year and calculates the average data speeds for the three big cellular carriers. The report for early 2021 shows the following nationwide average speeds for cellular data – speeds that just barely qualify as broadband under the current 25/3 definition:

AT&T – 29.9 Mbps download, 9.4 Mbps upload

T-Mobile – 32.7 Mbps download, 12.9 Mbps upload

Verizon – 32.2 Mbps download, 10.0 Mbps upload

PC Magazine tests cellular speeds in 26 major cities each summer. In the summer of 2020, they showed the following speeds:

AT&T – 103.1 Mbps download, 19.3 Mbps upload

T-Mobile – 74.0 Mbps download, 25.8 Mbps upload

Verizon – 105.1 Mbps download, 21.6 Mbps upload

Cellular data speeds are faster in cities for several reasons. First, there are more cell sites in cities. The data speed a customer receives on cellular is largely a function of how far the customer is from a cell site, and in cities, most customers are within a mile of the closest cell site. The cellular carriers have also introduced additional bands of spectrum in urban areas that are not being used outside cities. The biggest boost to the AT&T and Verizon urban speeds comes from the deployment of millimeter-wave cellular hotspots in small areas of the downtowns in big cities – a product that doesn’t use traditional cell sites, but which helps to increase the average speeds.

Comparing the urban speeds to the average speeds tells us that rural speeds are even slower than the averages. In rural areas, cellular customers are generally a lot more than one mile from a cell tower, which really reduces speeds. My firm does speed tests, and I’ve never seen a rural fixed cellular broadband product with a download speed greater than 20 Mbps, and many are a lot slower.

The AT&T blog never makes a specific recommendation of what the speeds ought to be, but Marsh hints at a new definition of 50/10 Mbps or 100/20 Mbps. My firm has also done a lot of surveys during the pandemic, and we routinely see a third or more of households that are unhappy with the upload speeds on urban cable company networks – which have typical upload speeds between 15 Mbps and 20 Mbps. AT&T is hoping that the FCC defines broadband with an upload speed of 10 to 20 Mbps – a speed that many homes already find inadequate today. That’s the only way that rural fixed cellular can qualify as broadband.