Bringing Back Communications Etiquette

Ahoy-hoy. That’s the way Alexander Graham Bell suggested we should answer the telephone. I’m sure Mr. Bell would be amazed to see that, 150 years later, we can routinely talk to each other face-to-face on Zoom. I’ve recently been reading about the early days of telephony, and today’s blog is a lighthearted look back at how the public reacted to the newfound ability to electronically reach out to friends and family. I must say that a few of the old etiquette suggestions don’t sound too bad in the day of Zoom calls.

When phones were first introduced, telephone network owners encouraged keeping calls short. In early telephone technology, a phone call required tying up wires between two callers – there was not yet any technology that allowed multiple calls to share the same piece of copper running between exchanges or cities. A call from New York to San Francisco completely monopolized a copper connection from coast to coast. An early British calling guide suggested that people shorten calls by not saying “hello” or wasting time on introductory pleasantries, but instead getting straight to the point. I remember doing traffic studies in the 70s when the average holding time for local calls was only 3 minutes. Tell me the last time you had a 3-minute Zoom call.

The biggest complaint of early operators was that people would walk away from making calls. There was often a significant wait for somebody who wanted to make a long-distance call – operators had to secure a free copper path from end-to-end in the network. After going through all of the work to set up a call, operators often found that the call originator had given up and walked away from the phone. Early phone books admonished callers to not make a call if they didn’t have the time to wait for it to occur.

My favorite early practice is that some early phone books discouraged calling before 9:00 AM or after 9:00 PM. This was partially due to phone companies not wanting to staff too many operators 24 hours per day, but it was also considered impolite to disturb people too early or too late. Many smaller telephone companies simply stopped manning the operator boards during the night.

Telephone calling was such a drastic societal change that phone companies routinely issued calling guides that detailed calling etiquette for using the new telephone contraption. There was obviously no caller ID in the early days, and operators often did not stay on the line to announce a call. Phone etiquette books suggested it was impolite to ask who was calling and that people should guess the identity of the caller rather than ask. Some phone books suggested that anybody answering the phone should tell their telephone number so that a caller would know if they had called the wrong number.

One of my favorite early telephone etiquette suggestions is that people should not use the telephone to invite people to a formal occasion. Something that important should only be done in writing so that the invitee would have all of the details of the invitation in hand.

Phonebooks included diagrams showing that the proper distance to hold the phone from the mouth was 1.5 inches. People were admonished not to shout into the handset. One California phonebook suggested that gentlemen trim their mustaches so that they could be clearly heard on the phone.

Of course, at the turn of the twentieth century, foul language was not tolerated. Somebody cursing on the phone within earshot of an operator stood a chance of losing their phone line or even getting a knock on the door from the police. Wouldn’t that concept throw a big wrench in the current First Amendment controversies about what is allowable online speech?

Still Waiting for IPv6

It’s now been a decade since the world officially ran out of blocks of IP addresses. In early 2011 the Internet Assigned Numbers Authority (IANA) announced that it had allocated the last block of IPv4 addresses and warned ISPs to start using the new IPv6 addresses. But here we are a decade later and not one of my clients has converted to IPv6.

Networks use IP addresses to route traffic to devices. Every cellphone, computer, network router, and modem is assigned an IP address so that ISPs can deliver traffic to the right device. The world adopted IPv4 in 1982. It uses a 32-bit address, which provides almost 4.3 billion unique IP addresses. That was enough addresses until 2011. IPv6 uses a 128-bit IP address. That provides roughly 340 trillion trillion trillion (3.4 x 10^38) IP addresses, which ought to carry mankind for centuries to come. Like most of us, I hadn’t thought about this in a long time and recently went to look to see how much of the world has converted to IPv6.
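
For anyone who wants to check that arithmetic, here is a minimal Python sketch (my own illustration, not from the post) showing the two address-space sizes as simple powers of two:

    # Back-of-the-envelope comparison of the IPv4 and IPv6 address spaces
    ipv4_addresses = 2 ** 32     # 32-bit addresses
    ipv6_addresses = 2 ** 128    # 128-bit addresses

    print(f"IPv4: {ipv4_addresses:,}")     # 4,294,967,296 (~4.3 billion)
    print(f"IPv6: {ipv6_addresses:.2e}")   # ~3.40e+38
    print(f"Ratio: {ipv6_addresses // ipv4_addresses:.2e}")  # ~7.92e+28 times more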

At the end of 2020, around 30% of all web traffic was being routed using IPv6. A lot of the biggest US ISPs have converted to IPv6 inside their networks. At the end of 2020, Comcast had converted 74% of its traffic to IPv6; Charter was at 54%. In the cellular world, both Verizon and AT&T are routing over 80% of traffic on IPv6, while T-Mobile is close to 100%. Around the world, some of the biggest ISPs have converted to IPv6. India leads the world with over 62% countrywide adoption at the end of 2020, with the US fourth at over 47% adoption.

But the big caveat with the above statistics is that a lot of the big ISPs are using IPv6 inside their networks but are still communicating with the outside world using IPv4. After all of the alarms were sounded in 2011, why haven’t we made the transition?

First, carriers have gotten clever in finding ways to preserve IPv4 addresses. For example, small ISPs and corporations use a single external IP address to identify the entire network. This allows for the assignment of private, non-routable IP addresses inside the network to reach individual customers and devices, much like CLECs have reduced the number of telephone numbers needed by switching internally with imaginary numbers.
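
A tiny illustration of the idea (my own sketch; the addresses are just examples): Python’s standard ipaddress module can show the difference between the reusable private space used inside a network and a real public address.

    # Private (RFC 1918) space can be reused inside every network; only the one
    # external address has to be globally unique.
    import ipaddress

    public_address = ipaddress.ip_address("8.8.8.8")      # a well-known public address
    inside_network = ipaddress.ip_network("10.0.0.0/8")   # RFC 1918 private space

    print(public_address.is_private)     # False - must be globally unique
    print(inside_network.is_private)     # True - reusable behind the public address
    print(inside_network.num_addresses)  # 16,777,216 possible inside addresses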

There is an extra cost for any ISP that wants to fully convert to IPv6. IPv6 is not backward compatible with IPv4, and any company that wants to route externally with IPv6 needs to maintain what is called a dual stack, meaning the network has to be able to handle every transaction in and out using either protocol. This adds expense and, more importantly, slows down routing.
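
What dual stack looks like in practice: a host publishes both an IPv4 (A) and an IPv6 (AAAA) address, and every client and server has to be ready to use either one. A minimal sketch (my own example; the hostname is just a placeholder, and running it requires network access):

    # List both the IPv4 and IPv6 addresses a dual-stack host advertises
    import socket

    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "example.com", 443, proto=socket.IPPROTO_TCP):
        label = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}.get(family, str(family))
        print(label, sockaddr[0])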

It’s also impossible to convert a network to IPv6 until all devices using the network are IPv6 compatible. This becomes less of an issue every year, but every ISP network still has customers and devices that are not IPv6 compatible. A customer still using a 12-year-old WiFi router would go dead with a full conversion to IPv6. This is one of the primary reasons that the big ISPs and cellular carriers aren’t at 100% IPv6. There are still a million folks using old flip phones that can’t be addressed with IPv6.

There is a definite cost for not converting to IPv6. There is a grey market for buying IPv4 addresses, and the cost per address has climbed in recent years. The typical price to buy an IPv4 address ranged from $24 to $29 during 2020. With all of the grant money being handed out, I expect the creation of a number of new ISPs in the next year. Many of them are going to be surprised that they need to spend that much to get IP addresses.

The main reason that the conversion hasn’t happened is that nobody is pushing it. The world keeps functioning using IPv4, and no ISP feels threatened by putting off the conversion. The first small ISPs that take the plunge to IPv6 will pay the price of being first with the technology – and nobody wants to be that guinea pig. Network purists everywhere are somewhat disgusted that their employers won’t take the big plunge – but even a decade after we ran out of IP addresses, it’s still not the right time to tackle the conversion.

I have no idea what will finally set off a rush to convert because it inevitably will happen. But until then, this will be a topic that you’ll barely hear about.

Insuring Fiber Networks

A few times each year I get the question of where a new ISP should go to get insurance for a fiber network. The question comes from new ISPs that are hoping that they can buy insurance that will compensate them for catastrophic damage to a new fiber network.

Everybody in the industry knows examples of catastrophic fiber damage. Fiber networks along the coasts can be devastated by hurricanes. The second biggest cause of network damage is probably ice storms, which can knock down wires across huge geographic areas. We’ve seen networks damaged by heavy floods. Networks are sometimes hit by tornadoes. And in the last year, we saw poles and fiber networks destroyed in the massive forest fires in the West, and occasionally in Appalachia.

The damage can be monumental. We saw hurricanes a few years ago that broke every utility pole in the Virgin Islands. We’ve seen a few towns along the Gulf coast leveled by hurricanes. The fires last year burned large swaths of forest, destroyed utility poles, and melted the wires.

Asking for insurance against such damages sounds like a sensible question since we seem to be able to buy insurance for almost anything. But the bad news for those looking for insurance for a fiber (or electric) network is that such insurance doesn’t exist – at least not in any affordable form.

This mystifies people who wonder why they can buy insurance to protect a $50 million building but not a $50 million fiber network. The answer is that a building owner can take multiple steps to protect a building. For example, an insurer might insist on a sprinkler system throughout a building to protect against fire damage. But there is nothing that a fiber network owner can do to brace against the ravages of Mother Nature. A large network can be badly damaged at any time and can be hit multiple times. In just the last few years, the City of Ruston, Louisiana was hit by a devastating tornado and then two subsequent hurricanes.

So how do owners get compensated after major network damage? The answer is FEMA. When there has been bad damage from the natural disasters listed earlier, a Governor and President can declare an emergency, and this unleashes state and federal funds to help pay to fix the damages. People often wonder about the size of federal funding after a disaster – the government isn’t only helping to fix destroyed roofs after a hurricane, but also the telco, cable, and power networks.

If you press an insurance company hard enough you can get damage insurance for fiber. I had a client who won an RFP to build fiber for a rural school, and the school insisted that the network be insured. Even after providing evidence that this is not a normal insurance policy, the school system insisted. My client bought a 2-year insurance damage policy for the newly built fiber that was priced at almost 20% of the cost of the fiber.

I remember when Fire Island in New York went without broadband and cellular coverage for well over a year after Hurricane Sandy while Verizon, the New York Public Service Commission, and FEMA argued about how the network was to be rebuilt. It’s far better to protect against catastrophic damage whenever possible. I have clients in storm-prone areas that have paid the extra cost to bury fiber networks. I have clients in flood zones that place electronics huts on stilts. A lot of ISPs work hard to make sure that trees stay trimmed to reduce ice damage. These ISPs know that not taking these extra precautions means the network is likely to get damaged and go out of service. There is nothing more satisfying than having a fiber network that keeps humming along during and after a big storm. Unfortunately, Mother Nature often has different plans.

A 10-Gigabit Tier for Grants

One of the biggest flaws in the recent RDOF reverse auction grant was allowing fixed wireless technology to claim the same gigabit technology tier as fiber. The FCC should never have allowed this to happen. While there is a wireless technology that can deliver up to a gigabit of speed to a few customers under specific circumstances, fiber can deliver gigabit speeds to every customer in a network. This matters most in a rural setting, where the short reach of gigabit wireless – perhaps a quarter mile – is a huge limiting factor for the technology.

But rather than continue to fight this issue for grant programs, there is a much easier solution. It’s now easy to buy residential fiber technology that can deliver 10 gigabits of speed. There have been active Ethernet lasers capable of 10-gigabit speeds for many years. In the last year, XGS-PON has finally come into a price range that makes it a good choice for a new passive fiber network – and the technology can deliver 10-gigabit download speeds.

The FCC can eliminate the question of technology equivalency by putting fiber overbuilders into a new 10-gigabit tier. This would give fiber funding priority over all other technologies. Fixed wireless will likely never be capable of 10-gigabit speeds. Even if that ever becomes possible decades from now, by then fiber will have moved on to the next faster generation. Manufacturers are already looking at 40-gigabit speeds for the next generation of PON technology.

Cable company hybrid-fiber coaxial networks are not capable today of 10-gigabit speeds. These networks could possibly deliver speeds of around 6 or 7 gigabits, but only by removing all of the television signals and delivering only broadband.

I don’t know why it was so hard for the FCC to say no to gigabit fixed wireless technology. When the industry lobbied to allow fixed wireless into the gigabit tier, all the FCC had to do was ask to see a working demo of wireless gigabit speeds in a rural environment where farms are far apart. The FCC should have insisted that the wireless industry demonstrate how every rural household in a typical RDOF area can receive gigabit speeds. They should have been made to show how the technology overcomes distance and line-of-sight issues. There is no such demo because the wireless technology can’t do this – at least not without building fiber and establishing a base transmitter at each farm. The FCC really got suckered by slick PowerPoints and whitepapers when it should have instead asked to see a working demo.

Don’t get me wrong – I don’t hate the new wireless technologies. There are small towns and neighborhoods in rural county seats that could really benefit from the technology. The new meshed networks, if fed by fiber, can deliver superfast bandwidth to small pockets of households and businesses. This can be a really attractive and competitive technology.

But this is not fiber. Every rural community in America knows they want fiber. They understand that once the wires are in place, fiber is going to be providing solutions for many decades into the future. I think that if fiber is built right, it’s a hundred-year investment. Nobody believes this to be true of fixed wireless. The radios are all going to be replaced many times over the next hundred years, and communities worry about having an ISP who will make that continual reinvestment.

But since there is such an easy way to fix this going forward, these arguments about gigabit wireless can be largely moot. If the FCC creates a 10-gigabit tier for grants, then only fiber will qualify. The fixed wireless folks can occupy the gigabit tier and leave most other technologies like low-orbit satellite to some even lower tier. The FCC made a mistake with RDOF that they can’t repeat going forward – the agency declared that other technologies are functionally equivalent to fiber – and it’s just not true.

Our Evolving Technologies

A client asked me recently for an update on all of the technologies used today to deliver broadband. The last time I talked through this topic with this client was three years ago. As I walked through each technology, it struck me that every technology we use for broadband is better now than it was three years ago. We don’t spend enough time talking about how the vendors in this industry keep improving technology.

Consider fiber. I have recently been recommending that new fiber builders consider XGS-PON. While this technology was around three years ago, it was too expensive and cutting edge at the time for most ISPs to consider. But AT&T and Vodafone have built enough of the technology that the prices for the hardware have dropped to be comparable to the commonly used GPON technology. This means we now need to start talking about FTTP as a 10-gigabit technology – a huge increase in capacity that blows away every other technology. Some improvements we see are more subtle. The fiber used for wiring inside buildings is far more flexible and bendable than it was three years ago.

There have been big improvements in fixed wireless technology. Some of this improvement is due to the FCC getting serious about providing more spectrum for rural fixed wireless. During the last three years, the agency has approved CBRS spectrum and white space spectrum that is now routinely used in rural deployments. The FCC also recently approved the use of 6 GHz WiFi spectrum that will add even more horsepower. There have also been big improvements in the radios. One of the improvements that rarely gets mentioned is new algorithms that speed up the wireless switching function. Three years ago, we talked about high-quality fixed wireless speeds of 25 Mbps to 50 Mbps, and now we’re talking about speeds over 100 Mbps in ideal conditions.

All three major cellular carriers are in the process of building out a much-improved fixed cellular broadband product. This has also benefited from new bands of frequencies acquired by the cellular carriers during the last three years. Three years ago, any customer with a cellular hotspot product complained about slow speeds and tiny monthly data caps. The new products allow for much greater monthly usage, up to unlimited, and speeds are better than they were three years ago. Speeds are still largely a function of how far a home is from the closest cell site, so this product is still dreadful for those without good cellular coverage – but it means improved broadband with speeds up to 50 Mbps for many rural households.

Three years ago, the low-orbit satellites from Starlink were just hype. Starlink now has over 1,000 satellites in orbit and is in beta test mode, with customers reporting download speeds from 50 Mbps to 150 Mbps. We’re also seeing serious progress from OneWeb and Jeff Bezos’s Project Kuiper, so this industry segment is on the way to finally becoming a reality. There is still a lot of hype, but that will die down when homes can finally buy the satellite broadband products – and when we finally understand speeds and prices.

Three years ago, Verizon was in the early testing stage of fiber-to-the-curb. After an early beta test and a pause to improve the product, Verizon is now talking about offering this product to 25 million homes by 2025. This product uses mostly millimeter-wave spectrum to get from the curb to homes. For now, the speeds are reported to be about 300 Mbps, but Verizon says this will get faster.

We’ve also seen big progress with millimeter-wave mesh networks. Siklu has a wireless product that it touts as an ideal way to bring gigabit speeds to a small shopping district. The technology delivers a gigabit connection to a few customers, and the broadband is then bounced from those locations to others. Oddly, some companies are talking about using this product to satisfy rural RDOF grants, which is puzzling since the transmission distance is only a quarter-mile and the technology requires clear line-of-sight. But expect to see this product pop up in small towns and retail districts all over the country.

Cable company technology has also improved over the last three years. During that time, a lot of urban areas saw the upgrade to DOCSIS 3.1, with download speeds now up to a gigabit. CableLabs also recently announced DOCSIS 4.0, which will allow for symmetrical gigabit-plus speeds but won’t be available for 3-5 years.

While you never hear much about it, DSL technology over copper has gotten better. There are new versions of G.fast being used to distribute broadband inside apartment buildings that are significantly better than what was on the market three years ago.

Interestingly, the product that got the most hype during the last three years is 5G. If you believe the advertising, 5G is now everywhere. The truth is that there is no actual 5G in the market yet, and this continues to be marketing hype. The cellular carriers have improved their 4G networks by overlaying new spectrum, but we’re not going to see 5G improvements for another 3-5 years. Unfortunately, I would bet that the average person on the street would say that the biggest recent telecom breakthrough has been 5G, which I guess shows the power of advertising and hype.

The Slow Death of Satellite TV?

There have been rumors for years about merging Dish Networks and DirecTV to try to gain as much market synergy as possible for the two sinking businesses. It’s hard to label these companies as failures just yet because the two companies collectively still had 21.8 million customers at the end of 2020 (DirecTV 13.0 million, Dish 8.8 million). This makes the two companies collectively the largest provider of cable TV, ahead of Comcast at 19.8 million and Charter at 16.2 million.

But both companies have been bleeding customers in the last few years. In 2020, DirecTV lost over 3 million customers and Dish Networks lost nearly 600,000. Together, the two companies lost 14% of their customers in 2020. This is not unusual in the industry – Comcast lost 1.4 million cable customers during the same year.

Dish Networks CEO Charlie Ergen has been predicting for years that a merger of the two companies is inevitable. The two companies could save money on infrastructure and overheads to prop up the combined businesses.

There are a number of factors that make a merger complicated. AT&T divested 30% of DirecTV earlier this year to TPG Capital. That deal included the TV products offered by DirecTV, U-Verse, and AT&T TV.

Probably the biggest long-term trend that bodes poorly for satellite TV is the federal government’s push to bring better broadband to rural America. Selling TV to customers with poor broadband is still the sweet spot for the two companies. As the number of homes with good broadband rises, the prospects for satellite TV sink.

My firm has been doing community surveys for twenty years, and we’ve noticed a big change in satellite TV penetration. A decade ago, I expected to find a 15% market share for satellite TV in almost any town that we surveyed. But in the last few years, people in towns appear to be the ones that have bailed on satellite TV. It’s rare for us to find more than a few percent of households in towns who still buy satellite TV. Households have moved to the web to find video content, with the big losers being satellite TV and landline cable companies.

I also notice the same thing in traveling around the country. It used to be that you’d see satellite dishes peppered in every neighborhood. But I’ve noticed that satellite dishes are becoming a rarity. I know from walking in my neighborhood that only one house still has satellite TV. Just a few years ago there were many more.

Finally, these two companies are both saddled with the ever-increasing programming costs that have plagued the whole industry. Cable customers everywhere have rate fatigue as prices are increased every year to account for higher programming costs. Satellite TV is like the rest of the industry and is pricing itself out of the budget range of the average household.

The two companies are also each saddled with a lot of current debt. Craig Moffett of MoffettNathanson recently estimated that the combined companies might not have a valuation of more than $1 billion – a bad harbinger for a merger.

It’s hard to picture any investor group that would want to back this merger. The whole idea behind a merger is that the combined company is worth more than the individual pieces. But even if the combined satellite companies were able to cut costs with a merger, it seems likely that any savings would quickly get subsumed by continued customer losses.

It’s not unrealistic to think that a decade from now this industry will have disappeared. Maybe the companies can hang on longer even as the number of customers continues to drop – but the math of doing so doesn’t bode well. The end of the satellite TV industry would feel odd to me. I witnessed the meteoric growth of the industry and watched satellite dishes popping up everywhere in the US. Satellite TV could fall into the category of huge tech industries that popped into existence, grew, and then died within our adult lifetime. I’m betting that we’re not far off from the day when kids will have no idea what a satellite dish is, just as they now stare perplexed at dial telephones.

To 5.5G and Beyond

I recently saw an article in FierceWireless reporting that Huawei thinks we are going to need an intermediate step between 5G and 6G, something like 5.5G. To me, this raises the more immediate question of why we are not talking about the steps between 4G and 5G.

The wireless industry used to tell the truth about cellular technology. You don’t need to take my word for it – search Google for 3.5G and you’ll find mountains of articles from 2010 to 2015 that talked about 3.5G as an important intermediate step between 3G and 4G. It was clearly understood that it would take a decade to implement all of the specifications that defined 4G, and industry experts, manufacturers, and engineers regularly debated the level of 4G implementation. Few people realize that we didn’t have the first fully 4G-compliant cell site until late 2018. Up until then, everything that was called 4G was something a little less than 4G. Interestingly, we debated the difference between 3.1G and 3.2G, but once the industry hit what might be considered 3.5G, the chatter stopped, and the industry leaped to labeling everything as 4G.

That same industry hype that didn’t want to talk about 3.8G has remained intact, and somehow, magically, we leaped to calling the next generation of technology 5G before even one of the new 5G technologies had been implemented in the network. All we’ve done so far is layer new spectrum bands onto 4G and label that as 5G. These new spectrum bands require phones that can receive the new frequencies, which phone manufacturers gleefully label as 5G phones. I’m not convinced that we are even at 4.1G yet, and yet the industry has fully endorsed labeling the first baby steps toward 5G as if we have full 5G.

I have to laugh at articles already talking about what comes next after 5G. It’s like picking the best marketing names for the self-driving hovercars that will someday replace regular self-driving cars, while we are only partway down the path of building self-driving cars that people are ready to buy and trust. The government wouldn’t let a car manufacturer falsely declare it has a fully self-driving car – but we seem to have no problem allowing cellular companies to claim 5G technology that doesn’t yet exist.

Back to the article about 6G. Huawei suggests that 5.5G would be 10 times faster than the current 5G specification and have lower latency. Unfortunately for this suggestion, we just suffered through a whole year of Verizon TV ads showing cellphones achieving gigabit-plus speeds. It’s almost as if Huawei hasn’t seen the Verizon commercials and doesn’t know that the US already has 5.5G. I’m thrilled to be the first to report that the US has already won the 5.5G race!

But it’s also somewhat ludicrous to be talking about 5.5G as an intermediate step on the way to 6G. The next generation of wireless technology we’re labeling as 6G will use terahertz spectrum. The wavelengths of those frequencies are so small that a terahertz signal beamed from a cellular tower would dissipate before it hits the ground. Even so, the technology holds a lot of promise for providing extremely high bandwidth for indoor communications. But faster 5G is not an intermediate step between today’s cellular technology and terahertz-based technology.

Interestingly, there could have been an intermediate step. We still have a long way to go to harness millimeter-wave spectrum in the wild. These frequencies require pure line-of-sight and pass through virtually nothing. I would expect over the next decade or two that lab scientists will find much better ways to propagate and use millimeter-wave spectrum.

But the cellular industry already claims it has solved all of the issues with millimeter-wave spectrum and already claims it as part of today’s 5G solution. It’s going to be anticlimactic when scientists announce breakthroughs in ways to use millimeter-wave spectrum that the cellular industry has already been claiming. Using millimeter-wave spectrum to its fullest capability could have been 5.5G. I can’t wait to see what the industry claims instead.

Reporting the Broadband Floor

I want to start by giving a big thanks to Deb Socia for today’s blog. I wrote a recent blog about the upcoming public reporting process for the FCC maps. In that blog, I noted that ISPs are going to be able to continue to report marketing speeds in the new FCC mapping. An ISP that may be delivering 3 Mbps download will continue to be able to report broadband speeds of 25/3 Mbps as long as that is marketed to the public. This practice of allowing marketing speeds that are far faster than actual speeds has resulted in a massive overstatement of broadband availability. This is the number one reason why the FCC badly undercounts the number of homes that can’t get broadband. The FCC literally encourages ISPs to overstate the broadband product being delivered.

In my Twitter feed for this blog, Deb posted a brilliant suggestion, “ISPs need to identify the floor instead of the potential ceiling. Instead of ‘up to’ speeds, how about we say ‘at least’”.

This simple change would force some honesty into FCC reporting. This idea makes sense for many reasons. We have to stop pretending that every home receives the same broadband speed. The speed delivered to customers by many broadband technologies varies by distance. Telco DSL speeds get noticeably slower the further they are transmitted. The fixed wireless broadband delivered by WISPs loses speed with distance from the transmitting tower. The fixed cellular broadband that the big cellular companies are now pushing has the same characteristic – speeds drop quickly with the distance from the cellular tower.

It’s a real challenge for an ISP using any of these technologies to pick a representative speed to advertise to customers – but customers want to know a speed number. DSL may be able to deliver 25/3 Mbps to a home that’s within a quarter-mile of a rural DSLAM. But a customer eight miles away might be lucky to see 1 Mbps. A WISP might be able to deliver 100 Mbps download speeds within the first mile from a tower, but the WISP might be willing to sell to a home that’s 10 miles away and deliver 3 Mbps for the same price. The same is true for the fixed cellular data plans recently being pushed by AT&T, Verizon, and T-Mobile. Customers who live close to a cell tower might see 50 Mbps broadband, but customers farther away are going to see a tiny fraction of that number.

The ISPs all know the limitations of their technology, but the FCC has never tried to acknowledge how technologies behave in real markets. The FCC mapping rules treat each of these technologies as if the speed is the same for every customer. Any mapping system that doesn’t recognize the distance issue is going to mostly be a huge fiction.

Deb suggests that ISPs must report the slowest speed they are likely to deliver. I want to be fair to ISPs, and I suggest they report both the minimum “at least” speed and the maximum “up to” speed. Those two numbers will tell the right story to the public because together they provide the range of speeds being delivered in a given census block. With the FCC’s new portal for customer input, the public could weigh in on the “at least” speeds. If a customer is receiving speeds slower than the “at least” speed, then, after investigation, the ISP would be required to lower that number in its reporting.
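
A minimal sketch of what this reporting could look like (my own illustration; the speed-test numbers are made up, not real data):

    # Derive the proposed "at least" / "up to" pair from measured customer speeds
    measured_download_mbps = [3.1, 8.4, 22.7, 48.9, 95.0, 101.3]  # hypothetical tests in one area

    at_least = min(measured_download_mbps)  # the floor every customer actually gets
    up_to = max(measured_download_mbps)     # the ceiling the ISP can honestly claim

    print(f"Report: at least {at_least} Mbps, up to {up_to} Mbps")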

This dual reporting will also allow quality ISPs to distinguish themselves from ISPs that cut corners. If a WISP only sells service to customers within 5 or 6 miles of a transmitter, then the difference between its “at least” speeds and its “up to” speeds would be small. But if another WISP is willing to sell a crappy broadband product a dozen miles from the transmitter, there would be a big difference between its two numbers. If this is reported honestly, the public will be able to distinguish between these two WISPs.

This dual reporting of speeds would also highlight the great technologies – a fiber network is going to have a gigabit “at least” and “up to” speed. This dual reporting will end the argument that fixed wireless is a pure substitute for fiber – which it clearly is not. Let the two speeds tell the real story for every ISP in the place of marketing hype.

I’ve been trying for years to find a way to make the FCC broadband maps meaningful. I think this is it. I’ve never asked this before, but everybody should forward this blog to the FCC Commissioners and politicians. This is an idea that can bring some meaningful honesty into the FCC broadband maps.

Controlling Fiber Construction Costs

It’s obvious with all of the grant money coming downhill from the federal government that there is going to be a lot of fiber constructed over the next year or two, and much of it by municipalities or other entities that have not built fiber before. Today’s blog talks about issues that can increase the cost of building fiber – an important topic since cost overruns could be devastating to an entity that is largely funded with grants.

I think everybody knows of cases where infrastructure funding has gone off the rails, with the final cost of a project being much higher than what was originally funded. I can remember when I last lived near DC and watched the cost of a new Beltway bridge over the Potomac come in at more than twice the original estimate. I can remember instances of big cost overruns for infrastructure like schools and roads. Cost overruns can also easily happen on fiber projects.

The number one issue facing the whole industry right now is shortages in the supply chain. I have clients seeing relatively long delivery times for fiber and fiber electronics. New entities that have never built fiber are going to go to the end of the line for receiving fiber. To the extent that grant-funded projects come with a mandated completion date, this is going to be an issue for some projects.

But more importantly, labor-related costs for building fiber are going to rise (and have already started to do so). With a huge volume of new projects, there will be a big shortage of consultants, engineers, and construction contractors. As always happens in times of high demand, labor rates are going to rise – and that’s assuming you can even find somebody to work on a small project. One of the hidden facts in the industry is that very few construction companies build 100% with their own staff and instead rely heavily on subcontractors. Those subcontractors are going to be bid away from small projects to get more lucrative work on big projects. Even ISPs that build with their own crews are going to see staff lured away by higher pay rates. If you estimated the cost of building fiber a few years ago, the labor component of those estimates is now too low. Another issue to consider is that some grants require paying labor at prevailing wages, which means at metropolitan rates. This alone can add 15% or more to the cost of a rural fiber project.

The biggest crunch will be consultants and engineers who work for smaller projects. I’m in this category. There are only a handful of good consultants and engineers and we’re already seeing that we are going to be swamped and fully booked before this year is over. Don’t be surprised if you hear that your preferred vendors are not taking on new business.

The other big gotcha in fiber construction projects is change orders. A change order is any event that gives a construction contractor a chance to charge more than the originally proposed cost of construction. Using the example of the bridge that went over budget – most of the extra costs came through change orders.

There are construction firms that bid low for projects with the expectation that they’ll make a lot more from change orders. You want to interview other communities that used the contractors you are considering. But a lot of change order costs can be laid at the feet of the project owner. It’s not unusual to see a project go out to bid that is not fully engineered and thought through. Changing your mind on almost any aspect of a project can mean extra costs and cost overruns. Here are just a few examples of situations I have seen on projects that added to the costs:

  • After the first neighborhood of a project was built, the client decided that they didn’t like fiber pedestals and wanted everything put into buried handholes. That meant ripping and replacing what had already been built and completely swapping inventory.
  • A contractor ran into a big underground boulder that was incredibly difficult to bore through. This was a city network, and the city would not allow an exception to build shallower only at this boulder and insisted on boring through it – at a huge, unexpected cost.
  • I worked on a project where the original specification was to build past every home and business in the community. Once construction was started the client decided to build fiber to every street, including the ones with no current buildings. That’s a valid decision to make, but it added a lot to construction costs.

I could write a week’s worth of blogs listing situations that added to construction costs. The bottom line for almost all of these issues is that the fiber builder needs to know what they want before a project starts. There should be at least preliminary engineering that closely estimates the cost of construction before starting. Project owners also need to be flexible if the contractor points out opportunities to save costs. But my observation is that a lot of change orders and cost overruns come from network owners that don’t know what they want before construction starts.

Rural Redundancy

This short article details how a burning tree cut off fiber optic access for six small towns in Western Massachusetts: Ashfield, Colrain, Cummington, Heath, Plainfield, and Rowe. I’m not writing about this today because this fiber cut was extraordinary, but because it’s unfortunately all too ordinary. There are fiber cuts every day that isolate communities by cutting off Internet access.

It’s not hard to understand why this happens in rural America. In much of the country, the fiber backbone lines that support Internet access to rural towns use the same routes that were built years ago to support telephone service. The telephone network is configured as a hub and spoke, and all of the towns in a region have a single fiber line to a single central tandem switch that was the historic focal point for regional telephone switching.

Unfortunately, a hub and spoke network (which resembles the spokes of a wagon wheel) does not have any redundancy. Each little town or cluster of towns typically had a single path to reach the telephone tandem – and today to reach the Internet.

The problem is that an outage that historically would have interrupted telephone service now interrupts broadband. This one cut in Massachusetts is a perfect example of how reliant we’ve become on broadband. Many businesses shut down completely without broadband. Businesses take orders and connect with customers in the cloud. Credit card processing happens remotely in the cloud. Businesses are often connected to distant corporate servers that provide everything from software connectivity to voice over IP. A broadband outage cuts off students taking classes from home and adults working from home. An Internet outage cripples most work-from-home people who work for distant employers. A fiber cut in a rural area can also cripple cell service if the cellular carriers use the same fiber routes.

The bad news is that nobody is trying to fix the problem. The existing rural fiber routes are likely owned by the incumbent telephone companies, and they are not interested in spending money to create redundancy. Redundancy in the fiber world means having a second fiber route into an area so that the Internet doesn’t go dead if the primary fiber is cut. One of the easiest ways to picture a redundant solution is to picture a ring of fiber that would be equivalent to the rim of the wagon wheel. This fiber would connect all of the ‘spokes’ and provide an alternate route for Internet traffic.
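
To make the topology point concrete, here is a minimal Python sketch (entirely my own illustration, with placeholder town names) that removes a single fiber link from a hub-and-spoke design and from a design with a connecting ring, then checks which towns can still reach the hub:

    from collections import deque

    def reachable_after_cut(links, cut, start="Hub"):
        """Return the towns still reachable from `start` after one link is cut."""
        links = [l for l in links if l != cut and l != (cut[1], cut[0])]
        neighbors = {}
        for a, b in links:
            neighbors.setdefault(a, set()).add(b)
            neighbors.setdefault(b, set()).add(a)
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in neighbors.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    spoke = [("Hub", "TownA"), ("Hub", "TownB"), ("Hub", "TownC")]
    ring = spoke + [("TownA", "TownB"), ("TownB", "TownC"), ("TownC", "TownA")]

    # Cut the single fiber serving TownA in each design.
    print(reachable_after_cut(spoke, ("Hub", "TownA")))  # TownA is isolated
    print(reachable_after_cut(ring, ("Hub", "TownA")))   # TownA still reaches the hub

The point of the sketch is simply that the ring gives every town a second path, which is exactly the redundancy the spoke routes inherited from the telephone network lack.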

To make things worse, the fiber lines reaching into rural America are aging. These were some of the earliest fiber routes built in the US, and fiber built in the 1980s was not functionally as good as modern fiber. Some of these fibers are already starting to die. We’re going to be faced eventually with the scenario of fiber lines like the one referenced in this article dying, and possibly not being replaced. A telco could use a dying fiber line as a reason to finally walk away from obsolete copper DSL in a region and refuse to repair a dying fiber line. That could isolate small communities for months or even a few years until somebody found the funding to replace the fiber route.

There have been regions that have tackled the redundancy issue. I wrote a blog last year about Project Thor in northwest Colorado where communities banded together to create the needed redundant fiber routes. These communities immediately connected critical infrastructure like hospitals to the redundant fiber and over time will move to protect more and more Internet traffic in the communities from routine and crippling fiber cuts.

This is a problem that communities are going to have to solve on their own. It is not made easier by the current fixation on only using grants to build last-mile connectivity and not middle-mile fiber. All of the last-mile fiber in the world is useless if a community can’t reach the Internet.