Those Troublesome FCC Maps

The FCC is in the process of reworking its broadband maps. The task of doing so is complicated and the new maps are likely going to be a big mess at first. In a recent article in Slate, Mike Conlow discusses two of the issues that the FCC will have trouble getting right.

One issue is identifying rural homes and businesses. We know from recent auctions that the FCC’s assumptions about the number of homes in a Census block are often wrong. It’s hard to count homes without broadband if we don’t know how to count homes in general. The mapping firm CostQuest suggests counting homes using satellite data, but the article shows how hard that can be. For instance, it shows a typical farm complex with multiple buildings. How does an automated mapping program count the homes in this situation? Mixed among the many farm buildings could be zero homes, one home, or several homes.

If you have ever looked at satellite maps in West Virginia, you see the opposite problem. There are homes under total tree cover that can’t be seen by satellite. To really complicate matters, there are several million rural vacation homes in the country, many not more than a shack or small cabin, many without power. How is satellite mapping going to distinguish a cabin without power from a home with full-time residents? It’s unlikely that a national attempt to count homes using satellite data is going to get this even close to right – but it means many millions of dollars to CostQuest to try.

The second mapping issue comes from ISPs that will have to draw polygons around service areas that have broadband or can get broadband within 10 days of a service request. The article shows a real example where it’s easy to draw a polygon along roads that leaves out homes that sit at the end of long lanes or driveways.
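To make the polygon problem concrete, here is a minimal sketch in Python of the kind of point-in-polygon check a mapping tool might run against a reported coverage area. The coverage polygon and home coordinates below are invented for illustration; real maps would use actual GIS data and more capable tooling.

```python
# Illustrative sketch: a ray-casting point-in-polygon test, the kind of check
# a mapping tool might use to decide whether a home falls inside an ISP's
# reported coverage polygon. All coordinates are made up for illustration.

def point_in_polygon(point, polygon):
    """Return True if the (x, y) point lies inside the polygon (a list of vertices)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from the point cross this polygon edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A coverage polygon drawn tightly along a road corridor (hypothetical lon/lat pairs)
coverage = [(-80.10, 38.50), (-80.00, 38.50), (-80.00, 38.51), (-80.10, 38.51)]

home_on_road = (-80.05, 38.505)
home_up_long_driveway = (-80.05, 38.53)  # set well back from the road

print(point_in_polygon(home_on_road, coverage))          # True
print(point_in_polygon(home_up_long_driveway, coverage)) # False - this home gets missed
```

A polygon drawn tightly along the road counts the roadside home as covered while quietly dropping the home at the end of the long driveway, which is exactly the error the article describes.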

When ISPs convert to this new mapping with the polygons, especially if housing data comes from satellite imagery, the resulting maps are going to have a lot of problems. The first iterations of the new maps will differ significantly from today’s mapping and it’s going to be nearly impossible to understand the difference between old and new.

As complicated as these two issues are, they are not the biggest problem with the mapping. The big issue that nobody in Congress or the FCC wants to talk about is that it’s nearly impossible to know the broadband speed delivered to a home. For most broadband technologies, the speed being delivered changes from second to second and from minute to minute. If you don’t think that’s true, then run a speed test at home a few dozen times today, every few hours. Unless your broadband comes from a stable fiber network, the chances are that you’ll get a wide range of speed test readings. After taking these multiple tests, tell me the broadband speed at your house. If it’s hard to define the speed for a single home, how are we supposed to tackle this en masse?
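As a rough illustration of that exercise, here is a small Python sketch that summarizes a batch of repeated speed test readings. The sample numbers are invented to show how widely results on a non-fiber connection can swing; a real script would call an actual speed test client instead of using a hardcoded list.

```python
# Hypothetical speed test readings (Mbps) collected over a day on a non-fiber
# connection; the values are invented for illustration.
import statistics

samples_mbps = [22.4, 8.1, 17.9, 3.6, 25.2, 11.0, 6.4, 19.8, 14.7, 4.9]

print(f"min:    {min(samples_mbps):.1f} Mbps")
print(f"median: {statistics.median(samples_mbps):.1f} Mbps")
print(f"mean:   {statistics.mean(samples_mbps):.1f} Mbps")
print(f"max:    {max(samples_mbps):.1f} Mbps")
# With a spread like this, which single number is "the speed" at this house?
```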

But let’s just suppose that in some magical way the FCC could figure out the average speed at a home over time. That still doesn’t help with the FCC mapping because ISPs will be allowed to report marketing speeds and not actual speeds to the FCC. The Slate article suggests that the biggest problem in today’s maps comes from counting broadband by Census blocks – where if one home has fast broadband, the entire Census block is counted as fast. That is a much smaller issue than people assume. The majority of misstated rural speeds today come instead from ISPs that claim they sell a speed that is much faster than what is delivered. Big telcos today report rural areas as having 25/3 capability for no reason other than the ISP says so – when in reality there might not be a single customer in that area receiving even 10/1 Mbps DSL. The big telcos have successfully been lying about speed capability for years as a way to shield areas against being overbuilt by grants. Recall that Frontier tried to sneak in over 16,000 speed changes for Census blocks just before the deadline of the RDOF grant. The new mapping is not going to be a whit better as long as ISPs can continue to lie about speeds with impunity.

There are a few simple ways to fix some of the worst problems with the maps. First, the FCC could declare that all DSL is no longer broadband and stop bothering to measure DSL speeds. They could do the same with high-orbit satellites that have huge latency issues. But even doing this solves only a portion of the problem. There are still numerous WISPs that report marketing speeds that are far faster than actual speeds. The FCC maps are also about to get inundated by the cellular companies making the same overstated speed claims for fixed rural cellular broadband.

What is so dreadful about all of this is that a rural home may have no real broadband option but might show up on the FCC maps as able to buy fast broadband from DSL, one or more WISPs, and one or more fixed cellular providers. The FCC is going to count such a home as a success because it has competition between multiple ISPs – when in reality the home might not have even one real broadband option.

I hate to be one of the few people who keeps saying this – but I’m sure that the new FCC maps won’t be any better than the current ones. Unfortunately, by the time that becomes apparent, Congress will have assumed the mapping is good and will have moved on to other issues.

AT&T Increasing Fiber Speeds

AT&T announced recently that it was unilaterally increasing the speeds for fiber customers. Customers that had the 100 Mbps service have been bumped to 300 Mbps. Customers with 300 Mbps are being bumped to 500 Mbps. AT&T continues to offer the gigabit tier.

It’s clear why the company made the change: the cable companies are doing the same thing. In December 2020, Charter increased its starting speed to 200 Mbps. Comcast increased speeds across the board in February. Its Performance product went from 60 Mbps to 100 Mbps. Performance Pro went from 150 Mbps to 200 Mbps. Blast went from 250 Mbps to 300 Mbps. Extreme went from 400 Mbps to 600 Mbps. In some markets, Comcast increased the top speed from 1 Gbps to 1.2 Gbps.

I’m sure that this latest round of speed increases by the cable companies was prompted by customers voicing dissatisfaction during the pandemic. We’ve learned that it costs little or nothing to increase speeds, except when increasing speed for a customer who has felt throttled. Unfortunately for cable customers, these speed increases aren’t going to bring them what they are hoping for since the complaints during the pandemic were not about download speeds, but upload speeds. I’m guessing the latest round of cable company speed increases didn’t move the meter much for upload speeds.

Fiber customers see a big increase in upload speeds with the AT&T speed increases since the company offers symmetrical broadband on fiber. But it’s unlikely that many homes felt constrained with uploading during the pandemic on the company’s 100 Mbps fiber service.

Cable companies have unilaterally increased speeds many times. If you go back to 2000, you would have found both Charter and Comcast offering 1 Mbps broadband. At that time, the companies were in a real battle with DSL since both technologies offered nearly identical speeds. I can remember Charter moving from 1 Mbps to 3 Mbps, to 6 Mbps, to 15 Mbps, to 30 Mbps, to 60 Mbps, and to 100 Mbps over those twenty years.

By the time that the cable companies got to 30 Mbps, they were leaving DSL behind, and over time they have annually captured a decent piece of the DSL market. It’s hard to understand, other than price, why somebody would stick with DSL in today’s market.

The pendulum might be swinging back the other direction, at least a bit. AT&T added over 1 million fiber customers in 2020, and you have to think a lot of the additions came from cable customers switching to faster fiber broadband. AT&T has now built fiber to pass 14.5 million homes and businesses and says it’s going to build past 2 million more residences and 1 million businesses in 2021.

I have to wonder how much more speed inflation we’ll see. At the rate that cable companies have been arbitrarily increasing speeds, we can’t be too far from seeing everybody being offered a gigabit.

It’s worth noting that just because a cable company increases speeds, there is no guarantee that a given household will see anything faster. To some degree, the numbers just announced by Comcast and Charter are marketing speeds. There are local constraints in many neighborhood networks that restrict speeds. Some homes will need a new modem to achieve the faster advertised speeds. We can’t forget the drawn-out confrontation between the New York Public Service Commission and Charter over a huge number of homes that still had old modems that couldn’t receive the advertised speeds. Homes were promised 100 Mbps and were getting less than 20 Mbps. The fight got so bad that the State started the process of tossing Charter out of the state.

By contrast, fiber ISPs tend to deliver all or most of the speed they advertise. In speed tests, we usually see speeds on fiber within a few percent of the advertised speeds – and sometimes faster. But I don’t think the cable companies are too worried about AT&T and other fiber providers. At this point, the cable companies collectively probably pass nearly 100 million homes for which they are the only fast broadband alternative.

Bringing Back Communications Etiquette

Ahoy-hoy. That’s the way that Alexander Graham Bell suggested we should answer the telephone. I’m sure Mr. Bell would be amazed to see that 150 years later we can routinely talk to each other face-to-face on Zoom. I’ve recently been reading about the early days of telephony, and today’s blog is a lighthearted look back at how the public reacted to the newfound ability to electronically reach out to friends and family. I must say that a few of the old etiquette suggestions don’t sound too bad in the day of Zoom calls.

When phones were first introduced, telephone network owners encouraged keeping calls short. In early telephone technology, a phone call required tying up wires between two callers – there was not yet any technology that allowed multiple calls to share the same piece of copper running between exchanges or cities. A call from New York to San Francisco completely monopolized a copper connection from coast to coast. An early British calling guide suggested that people shorten calls by skipping “hello” and any introductory pleasantries and getting straight to the point. I remember doing traffic studies in the 70s when the average hold-time for local calls was only 3 minutes. Tell me the last time you had a 3-minute Zoom call.

The biggest complaint of early operators was that people would walk away from making calls. There was often a significant wait for somebody who wanted to make a long-distance call – operators had to secure a free copper path from end-to-end in the network. After going through all of the work to set up a call, operators often found that the call originator had given up and walked away from the phone. Early phone books admonished callers to not make a call if they didn’t have the time to wait for it to occur.

My favorite early practice is that some early phone books discouraged calling before 9:00 AM or after 9:00 PM. This was partially due to phone companies not wanting to staff too many operators 24 hours per day, but it was also considered impolite to disturb people too early or too late. Many smaller telephone companies simply stopped manning the operator boards during the night.

Telephone calling was such a drastic societal change that phone companies routinely issued calling guides that detailed calling etiquette for using the new telephone contraption. There was obviously no caller ID in the early days, and operators often did not stay on the line to announce a call. Phone etiquette books suggested it was impolite to ask who was calling and that people should guess the identity of the caller rather than ask. Some phone books suggested that anybody answering the phone should tell their telephone number so that a caller would know if they had called the wrong number.

One of my favorite early telephone etiquette suggestions is that people should not use the telephone to invite people to a formal occasion. Something that important should only be done in writing so that the invitee would have all of the details of the invitation on paper.

Phonebooks included diagrams showing that the proper distance to hold the phone from the mouth was 1.5 inches. People were admonished not to shout into the handset. One California phonebook suggested that gentlemen trim mustaches so that they could be clearly heard on the phone.

Of course, at the turn of the twentieth century, foul language was not tolerated. Somebody cursing on the phone and being overheard by an operator stood a chance of losing their phone line or even getting a knock on the door from the police. Wouldn’t that concept throw a big wrench in the current First Amendment controversies about what is allowable online speech?

Still Waiting for IPv6

It’s now been a decade since the world officially ran out of blocks of IP addresses. In early 2011 the Internet Assigned Numbers Authority (IANA) announced that it had allocated the last block of IPv4 addresses and warned ISPs to start using the new IPv6 addresses. But here we are a decade later and not one of my clients has converted to IPv6.

Networks use IP addresses to identify devices. Every cellphone, computer, network router, and modem is assigned an IP address so that ISPs can route traffic to the right device. The world adopted IPv4 in 1982. It uses a 32-bit address, which provides almost 4.3 billion IP addresses. That was enough addresses until 2011. IPv6 uses a 128-bit address. This provides for roughly 340 trillion trillion trillion (3.4 x 10^38) IP addresses, which ought to carry mankind for centuries to come. Like most of us, I hadn’t thought about this in a long time and recently went looking to see how much of the world has converted to IPv6.
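The arithmetic behind those two numbers is simple enough to check directly; this short snippet just computes the size of each address space.

```python
# Size of the IPv4 and IPv6 address spaces.
ipv4_addresses = 2 ** 32    # 32-bit addresses
ipv6_addresses = 2 ** 128   # 128-bit addresses

print(f"IPv4: {ipv4_addresses:,}")    # 4,294,967,296 - almost 4.3 billion
print(f"IPv6: {ipv6_addresses:.2e}")  # about 3.40e+38
```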

At the end of 2020, around 30% of all web traffic was being routed using IPv6. A lot of the biggest US ISPs have converted to IPv6 inside their networks. At the end of 2020, Comcast had converted 74% of its traffic to IPv6; Charter was at 54%. In the cellular world, both Verizon and AT&T are routing over 80% of traffic on IPv6, while T-Mobile is close to 100%. Around the world, some of the biggest ISPs have converted to IPv6. India leads the world with over 62% countrywide adoption at the end of 2020, with the US in fourth at over 47% adoption.

But the big caveat with the above statistics is that a lot of the big ISPs are using IPv6 inside their networks but are still communicating with the outside world using IPv4. After all of the alarms were sounded in 2011, why haven’t we made the transition?

First, carriers have gotten clever in finding ways to preserve IPv4 addresses. For example, small ISPs and corporations use a single external IP address to represent an entire network. This allows the assignment of private IP addresses inside the network to reach individual customers and devices, much like CLECs have reduced the number of telephone numbers needed by switching internally with private numbers.
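The address-sharing trick described above is network address translation (NAT). The sketch below is a simplified illustration of the bookkeeping involved, with made-up private addresses and a public address from the documentation range; in reality this happens inside the router, not in application code.

```python
# Simplified NAT sketch: many devices on private RFC 1918 addresses share one
# public IPv4 address, distinguished by port. All addresses here are examples.

PUBLIC_IP = "203.0.113.10"   # the network's single public address (documentation range)
nat_table = {}               # (private_ip, private_port) -> public port
next_public_port = 40000

def translate_outbound(private_ip, private_port):
    """Map an internal flow onto the shared public address and a unique port."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
print(translate_outbound("192.168.1.37", 51000))  # ('203.0.113.10', 40001)
```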

There is an extra cost for any ISP that wants to fully convert to IPv6. IPv6 is not backward compatible with IPv4, so any company that wants to route externally with IPv6 needs to maintain what is called a dual stack, meaning that every transaction in and out of the network has to be able to route using both protocols. This adds expense, but more importantly, it slows down routing.
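One small way to see dual stack in practice is to ask the resolver for both the IPv4 (A) and IPv6 (AAAA) addresses of the same host. The hostname below is just a stand-in, and the results depend entirely on your own resolver and connectivity.

```python
# List both IPv4 and IPv6 addresses returned for a host.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```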

It’s also impossible to convert a network to IPv6 until all devices using the network are IPv6 compatible. This becomes less and less of an issue every year, but every ISP network still has customers and devices that are not IPv6 compatible. A customer still using a 12-year-old WiFi router would go dead with a full conversion to IPv6. This is one of the primary reasons that the big ISPs and cellular carriers aren’t at 100% IPv6. There are still a million folks using old flip phones that can’t be addressed with IPv6.

There is a definite cost for not converting to IPv6. There is a grey market for buying IPv4 addresses, and the cost per address has climbed in recent years. The typical price to buy an IPv4 address ranged from $24 to $29 during 2020. With all of the grant money being handed out, I expect the creation of a number of new ISPs in the next year. Many of them are going to be surprised by how much they have to spend to get IP addresses.
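To put that price range in perspective, here is a back-of-the-envelope calculation for a hypothetical new ISP buying a modest /22 block; the block size is my own example, not a figure from any filing.

```python
# Rough cost of a /22 block (1,024 addresses) at the 2020 price range quoted above.
addresses_in_slash_22 = 2 ** (32 - 22)   # 1,024 addresses
for price_per_address in (24, 29):
    total = addresses_in_slash_22 * price_per_address
    print(f"${price_per_address}/address -> ${total:,}")
# $24/address -> $24,576    $29/address -> $29,696
```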

The main reason that the conversion hasn’t happened is that nobody is pushing it. The world keeps functioning using IPv4, and no ISP feels threatened by putting off the conversion. The first small ISPs that take the plunge to IPv6 will pay the price of being first with the technology – and nobody wants to be that guinea pig. Network purists everywhere are somewhat disgusted that their employers won’t take the big plunge – but even a decade after we ran out of IP numbers, it’s still not the right time to tackle the conversion.

I have no idea what will finally set off a rush to convert because it inevitably will happen. But until then, this will be a topic that you’ll barely hear about.

Insuring Fiber Networks

A few times each year I get the question of where a new ISP should go to get insurance for a fiber network. The question comes from new ISPs that are hoping that they can buy insurance that will compensate them for catastrophic damage to a new fiber network.

Everybody in the industry knows of many examples of catastrophic fiber damage. Fiber networks along the coasts can be devastated by hurricanes. The second biggest cause of network damage is probably ice storms, which can knock down wires across huge geographic areas. We’ve seen networks damaged by heavy floods. Networks are sometimes hit by tornadoes. In the last year, we saw poles and fiber networks destroyed in the massive forest fires in the west, and occasionally in Appalachia.

The damage can be monumental. We saw hurricanes a few years ago that broke every utility pole in the Virgin Islands. We’ve seen a few towns along the Gulf coast leveled by hurricanes. The fires last year burned large swaths of forest, destroyed utility poles, and melted the wires.

Asking for insurance against such damages sounds like a sensible question since we seem to be able to buy insurance for almost anything. But the bad news for those looking for insurance for a fiber (or electric) network is that such insurance doesn’t exist – at least not in any affordable form.

This mystifies people who wonder why they can buy insurance to protect a $50 million building but not a $50 million fiber network. The answer is that a building owner can take multiple steps to protect a building. For example, an insurer might insist on a sprinkler system throughout a building to protect against fire damage. But there is nothing that a fiber network owner can do to brace against the ravages of mother nature. A large network can be badly damaged at any time and can be hit multiple times. In just the last few years, the City of Ruston, Louisiana was hit by a devastating tornado and then two subsequent hurricanes.

So how do owners get compensated after major network damage? The answer is FEMA. When there has been bad damage from the natural disasters listed earlier, a Governor and President can declare an emergency, and this unleashes state and federal funds to help pay to fix the damages. People often wonder about the size of federal funding after a disaster – the government isn’t only helping to fix destroyed roofs after a hurricane, but also the telco, cable, and power networks.

If you press an insurance company hard enough, you can get damage insurance for fiber. I had a client who won an RFP to build fiber for a rural school, and the school insisted that the network be insured. Even after my client provided evidence that this is not a normal insurance policy, the school system insisted. My client bought a 2-year damage insurance policy for the newly built fiber that was priced at almost 20% of the cost of the fiber.

I remember when Fire Island in New York went without broadband and cellular coverage for well over a year after Hurricane Sandy while Verizon, the New York Public Service Commission, and FEMA argued about how the network was to be rebuilt. It’s far better to protect against catastrophic damage whenever possible. I have clients in storm-prone areas that have paid the extra cost to bury fiber networks. I have clients in flood zones that place electronics huts on stilts. A lot of ISPs work hard to make sure that trees stay trimmed to reduce ice damage. These ISPs know that not taking these extra precautions means the network is likely to get damaged and go out of service. There is nothing more satisfying than having a fiber network that keeps humming along during and after a big storm. Unfortunately, mother nature often has different plans.

A 10-Gigabit Tier for Grants

One of the biggest flaws in the recent RDOF reverse auction grant was allowing fixed wireless technology to claim the same gigabit technology tier as fiber. The FCC should never have allowed this to happen. While there is a wireless technology that can deliver up to a gigabit of speed to a few customers under specific circumstances, fiber can deliver gigabit speeds to every customer in a network. This matters most in a rural setting, where the short reach of gigabit wireless, perhaps a quarter mile, is a huge limiting factor for the technology.

But rather than continue to fight this issue for grant programs, there is a much easier solution. It’s now easy to buy residential fiber technology that can deliver 10 gigabits of speed. There have been active Ethernet lasers capable of 10-gigabit speeds for many years. In the last year, XGS-PON has finally come into a price range that makes it a good choice for a new passive fiber network – and the technology can deliver 10-gigabit download speeds.

The FCC can eliminate the question of technology equivalency by putting fiber overbuilders into a new 10-gigabit tier. This would give fiber funding priority over all other technologies. Fixed wireless will likely never be capable of 10-gigabit speeds. Even if that ever becomes possible decades from now, by then fiber will have moved on to the next faster generation. Manufacturers are already looking at 40-gigabit speeds for the next generation of PON technology.

Cable company hybrid-fiber coaxial networks are not capable today of 10-gigabit speeds. These networks could possibly deliver speeds of around 6 or 7 gigabits, but only by removing all of the television signals and delivering only broadband.

I don’t know why it was so hard for the FCC to say no to gigabit fixed wireless technology. When the industry lobbied to allow fixed wireless into the gigabit tier, all the FCC had to do was ask to see a working demo of gigabit wireless in a rural farm environment where farms are far apart. The FCC should have insisted that the wireless industry demonstrate how every rural household in a typical RDOF area can receive gigabit speeds. The industry should have been made to show that the technology overcomes distance and line-of-sight issues. There is no such demo because the wireless technology can’t do this – at least not without building fiber and establishing a base transmitter at each farm. The FCC really got suckered by slick PowerPoints and whitepapers when it should have instead asked to see a working demo.

Don’t get me wrong – I don’t hate the new wireless technologies. There are small towns and neighborhoods in rural county seats that could really benefit from the technology. The new meshed networks, if fed by fiber, can deliver superfast bandwidth to small pockets of households and businesses. This can be a really attractive and competitive technology.

But this is not fiber. Every rural community in America knows they want fiber. They understand that once you put the wires in place, fiber is going to be providing solutions for many decades into the future. I think that if fiber is built right, it’s a hundred-year investment. Nobody believes this to be true of fixed wireless. The radios are all going to be replaced many times over the next hundred years, and communities worry about having an ISP who will make that continual reinvestment.

But since there is such an easy way to fix this going forward, these arguments about gigabit wireless can be largely moot. If the FCC creates a 10-gigabit tier for grants, then only fiber will qualify. The fixed wireless folks can occupy the gigabit tier and leave most other technologies like low-orbit satellite to some even lower tier. The FCC made a mistake with RDOF that they can’t repeat going forward – the agency declared that other technologies are functionally equivalent to fiber – and it’s just not true.

Our Evolving Technologies

A client asked me recently for an update on all of the technologies used today to deliver broadband. The last time I talked about this topic with this client was three years ago. As I talked through each technology, it struck me that every technology we use for broadband is better now than it was three years ago. We don’t spend enough time talking about how the vendors in this industry keep improving technology.

Consider fiber. I have recently been recommending that new fiber builders consider XGS-PON. While this technology was around three years ago, it was too expensive and cutting edge at the time for most ISPs to consider. But AT&T and Vodafone have built enough of the technology that the prices for the hardware have dropped to be comparable to the commonly used GPON technology. This means we now need to start talking about FTTP as a 10-gigabit technology – a huge increase in capacity that blows away every other technology. Some improvements we see are more subtle. The fiber used for wiring inside buildings is far more flexible and bendable than it was three years ago.

There have been big improvements in fixed wireless technology. Some of this improvement is due to the FCC getting serious about providing more spectrum for rural fixed wireless. During the last three years, the agency has approved CBRS spectrum and white space spectrum that is now being routinely used in rural deployments. The FCC also recently approved the use of 6 GHz WiFi spectrum that will add even more horsepower. There have also been big improvements in the radios. One improvement that rarely gets mentioned is new algorithms that speed up the wireless switching function. Three years ago, we talked about high-quality fixed wireless speeds of 25 Mbps to 50 Mbps, and now we’re talking about speeds over 100 Mbps in ideal conditions.

All three major cellular carriers are in the process of building out a much-improved fixed cellular broadband product. This has also benefited from new bands of frequencies acquired by the cellular carriers during the last three years. Three years ago, any customer with a cellular hotspot product complained about slow speeds and tiny monthly data caps. The new products allow for much greater monthly usage, up to unlimited, and speeds are better than they were three years ago. Speeds are still largely a function of how far a home is from the closest cell site, so this product is still dreadful for those without good cellular coverage – but it means improved broadband with speeds up to 50 Mbps for many rural households.

Three years ago, the low-orbit satellites from Starlink were just hype. Starlink now has over 1,000 satellites in orbit and is in beta test mode, with customers reporting download speeds from 50 Mbps to 150 Mbps. We’re also seeing serious progress from OneWeb and Jeff Bezos’s Project Kuiper, so this industry segment is on the way to finally becoming a reality. There is still a lot of hype, but that will die down when homes can finally buy the satellite broadband products – and when we finally understand speeds and prices.

Three years ago, Verizon was in the early testing stage of fiber-to-the-curb. After an early beta test and a pause to improve the product, Verizon is now talking about offering this product to 25 million homes by 2025. This product uses mostly millimeter-wave spectrum to get from the curb to homes. For now, the speeds are reported to be about 300 Mbps, but Verizon says this will get faster.

We’ve also seen big progress with millimeter-wave mesh networks. Siklu has a wireless product that it touts as an ideal way to bring gigabit speeds to a small shopping district. The technology delivers a gigabit connection to a few customers, and the broadband is then bounced from those locations to others. Oddly, some companies are talking about using this product to satisfy the rural RDOF grants, which is puzzling since the transmission distance is only a quarter-mile and the technology requires clear line-of-sight. But expect to see this product pop up in small towns and retail districts all over the country.

Cable company technology has also improved over the last three years. During that time, a lot of urban areas saw the upgrade to DOCSIS 3.1, with download speeds now up to a gigabit. CableLabs also recently announced DOCSIS 4.0, which will allow for symmetrical gigabit-plus speeds but which won’t be available for 3-5 years.

While you never hear much about it, DSL technology over copper has gotten better. There are new versions of G.fast being used to distribute broadband inside apartment buildings that are significantly better than what was on the market three years ago.

Interestingly, the product that got the most hype during the last three years is 5G. If you believe the advertising, 5G is now everywhere. The truth is that there is no actual 5G in the market yet, and this continues to be marketing hype. The cellular carriers have improved their 4G networks by overlaying new spectrum, but we’re not going to see 5G improvements for another 3-5 years. Unfortunately, I would bet that the average person on the street would say that the biggest recent telecom breakthrough has been 5G, which I guess shows the power of advertising and hype.