Did Broadband Deregulation Save the Internet?

Something has been bothering me for several months, and that usually manifests in a blog at some point. During the COVID-19 crisis, the FCC and big ISPs have repeatedly said that the only reason our networks weathered the increased traffic during the pandemic was due to the FCC’s repeal of net neutrality and deregulation of the broadband industry. Nothing could be further from the truth.

The big increase in broadband traffic was largely a non-event for the big ISPs. Networks only come under real stress during the busiest times of the day, when performance can collapse because the network is overloaded. There was a big increase in overall Internet traffic during the pandemic, but the busy hour was barely affected. The busy hour for the Internet as a whole is mid-evening, when the greatest number of homes are watching video at the same time. Every carrier that discussed the impact of COVID-19 said that traffic during the evening busy hour didn’t change during the pandemic. What changed was a lot more usage during the daytime as students took school classes from home and employees worked from home. Daytime traffic increased, but it never grew greater than the evening traffic. As surprising as that might seem to the average person, ISP networks were never in any danger of crashing – they just got busier than normal during the middle of the day. The big ISPs are crowing about weathering a storm that never put their networks in serious peril.

It’s ironic to see the big ISPs taking a victory lap about their performance during the pandemic because the pandemic shined a light on ISP failures.

  • First, the pandemic reminded America that there are tens of millions of rural homes that don’t have good broadband. For years the ISPs argued that they didn’t invest in rural America because they were unwilling to invest in an overregulated environment. The big ISPs all promised they would increase investment and hire more workers if they were deregulated. That was an obvious lie, since the big ISPs like Comcast and AT&T have cut investments since the net neutrality repeal, and collectively the big ISPs have laid off nearly 100,000 workers since then. The fact is that the big ISPs haven’t invested in rural broadband in decades and even 100% deregulation is not enough incentive for them to do so. The big ISPs wrote off rural America many years ago, so any statements they make to the contrary are purely rhetoric and lobbying.
  • The pandemic also highlighted the stingy and inadequate upload speeds that most big ISPs offer. This is the broadband crisis that arose during the pandemic that the big ISPs aren’t talking about. Many urban homes that thought they had good broadband were surprised when they had trouble moving the office and school to their homes. The problem was not with download speeds, but with the upload speeds needed to connect to school and work servers and to talk all day on video chat platforms – activities that rely on a solid and reliable upload speed. Homes have reacted by migrating to fiber when it is available. The number of households that subscribe to gigabit broadband doubled from December 2019 to the end of March 2020.

The big ISPs and the FCC have also made big political hay during the crisis about the Keep America Connected Pledge, where ISPs promised not to disconnect homes for non-payment during the pandemic. I’m pretty sure the ISPs will go silent on that topic, because the other shoe is about to drop as the ISPs expect homes to catch up on those ‘excused’ missed payments if they want to keep their home broadband. It’s likely that millions of homes that ran out of money after losing their jobs will soon be labeled as deadbeats by the ISPs and won’t be let back onto the broadband networks until they pay their outstanding balance, including late fees and other charges.

The shame of the Keep America Connected Pledge was that it had to be voluntary because the FCC destroyed its ability to regulate ISPs in any way. The FCC has no tools left in the regulatory quiver to deal with the pandemic after it killed Title II regulation of broadband.

I find it irksome to watch an industry that completely won the regulatory battle keep acting like it is under siege. The big ISP lobbyists won completely and got the FCC to neuter itself, and yet the big ISPs miss no opportunity to keep making the same false claims they used to win the regulation fight.

It’s fairly obvious that the big ISPs are already positioning themselves to fight off the time when the regulatory pendulum swings the other way. History has shown us that monopoly overreach always leads to a reaction from the public that demands stronger regulation. It’s in the nature of all monopolies to fight against regulation – but you’d think the ISP industry could come up with something new rather than to repeat the same lame arguments they’ve been making for the last decade about how overregulation is killing them.

Starry Back in the News

I’ve written about Starry several times since they first tried to launch in 2016. Their first market launch was a failure and it seems that the technology of beaming broadband to windows in apartment units never worked as planned. Since then the company has regrouped and now is using a business plan of connecting to the roofs of apartment buildings using millimeter wave radio. This is the same business plan pursued by Webpass, which was purchased by Google, although the technology and spectrum are different.

Starry was founded by Chet Kanojia who was also the founder of Aereo – the company that tried to deliver affordable local programming in cities through a wireless connection. Starry originally launched in Boston but has recently added Los Angeles, New York City, Denver, and Washington, D.C.

Starry is still advertising a simple product set – $50 per month for 200 Mbps symmetrical broadband. There’s a $50 install fee and then no add-ons or extra charges on top of the $50 rate. This easily beats the prices of the big cable companies or of Verizon FiOS. Starry is likely filling a competitive void in New York City where Verizon has still failed to connect broadband to thousands of high rises and millions of potential subscribers.

Starry is advertising ease of use along with low prices. Once a building is added to the Starry network they promise to install a customer at a scheduled time rather than providing a 4-6 hour window like their landline competition. Their web site doesn’t discuss the technology used to reach buildings, but it says they use existing building wiring. G.fast is likely being used to deliver the broadband over telephone wiring inside the building, since there is no easy way to share coaxial cable with a customer who is still buying cable TV. That would also explain how they can promise fast hook-ups, since every unit in a high rise would typically already have telephone wiring.

Starry may be planning for faster speeds in the future since they were one of the largest buyers of spectrum in the 2019 auction for 24 GHz spectrum. Starry still advertises that they use phased-array antennas. This technology allows a single antenna radiator to transmit at different phases of the same frequency. This is one of the easiest ways to ‘steer’ the direction of the signal and Starry uses this technology to accomplish beamforming. What that means in a busy urban environment is that Starry can deliver more bandwidth to a rooftop than a traditional transmitter antenna.
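The phase steering described above can be illustrated with a quick sketch. This is not Starry’s implementation – just the textbook arithmetic for a uniform linear array, with a half-wavelength element spacing assumed for the example:

```python
import math

# Illustrative sketch of phased-array beam steering: each antenna
# element transmits the same frequency, offset in phase so the wavefronts
# add up in the desired direction. Half-wavelength element spacing
# (d/lambda = 0.5) is an assumed textbook value, not a Starry spec.
def steering_phases(n_elements, steer_deg, d_over_lambda=0.5):
    theta = math.radians(steer_deg)
    # Each element is delayed by 2*pi*(d/lambda)*sin(theta) relative
    # to its neighbor; the cumulative offsets steer the beam.
    step = 2 * math.pi * d_over_lambda * math.sin(theta)
    return [i * step for i in range(n_elements)]

# Phase offsets (radians) for an 8-element array steered 30 degrees off boresight
phases = steering_phases(8, 30)
```

Changing only these per-element phases moves the beam, which is why one rooftop transmitter can aim concentrated bandwidth at many different buildings.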

Interestingly, the company doesn’t claim to be delivering 5G, as every other wireless provider does. This is a good reminder that millimeter-wave spectrum does not automatically equate to 5G. Starry says they are still using the simpler and cheaper 802.11 WiFi standards within the broadband path.

MoffettNathanson recently said they were bullish on the Starry model. Even though the company currently has a relatively small number of customers, their goal of chasing 30% of the urban high-rise market seems credible to the analysts. Starry’s technology can deliver broadband all across an urban downtown from one or two big tower transmitters. That contrasts with Verizon’s 5G technology that delivers fast bandwidth from small cells that must be within 1,000 feet of a home. MoffettNathanson did caution that Starry’s business plan is likely not replicable in the suburbs or smaller towns – but there are a lot of potential customers sitting in high rises in the urban centers of the country.

This kind of competition adds a lot of pressure on other ISPs wanting to serve large apartment buildings in downtown areas. Verizon found that gaining entry to buildings was its key stumbling block in Manhattan, which resulted in the company badly violating its agreement with the City to bring FiOS to everybody. A wireless company like Starry can leap over the long list of impediments that make it hard to bring wires into urban high rises – and low prices for good broadband ought to be an interesting competitive alternative for a lot of people.

Charter Asks the FCC to Allow Data Caps

In a move that was probably inevitable, Charter has petitioned the FCC to allow the company to begin implementing broadband data caps. Charter has been prohibited from charging data caps as part of an agreement with the FCC when the agency approved the merger with Time Warner Cable in 2016. Charter is also asking the FCC to lift another provision of the merger agreement that prohibits the company from imposing interconnection fees on Netflix and other companies that generate large amounts of web data.

There was one other requirement of the original merger agreement that the FCC already modified in 2017. Charter had voluntarily agreed to pass 2 million new homes within five years of the merger. The original agreement required Charter to overbuild and compete against other cable companies for a portion of those passings, but in 2017 that was changed to instead require that all 2 million passings be to new homes.

The merger agreement between the FCC and Charter is in effect until May 2023, but the original deal allowed Charter to ask to be relieved of the obligations after four years, which is the genesis of this request. If granted, the two changes would occur in May 2021.

There seems a decent likelihood that the FCC will grant the requests, since both Chairman Ajit Pai and Commissioner Michael O’Rielly voted against these merger conditions in 2016 and said the restrictions were too harsh.

What I find interesting is that Charter has been bragging to customers for the last four years about how they are the large ISP that doesn’t impose burdensome data caps on customers. This has likely given them a marketing edge in markets where the company competes against AT&T, which aggressively enforces data caps.

Charter has to be jealous of the huge dollars that Comcast and AT&T are receiving from data caps. Back in 2016, there were not many homes that used more data than the 1 terabyte cap that AT&T and Comcast place on customers. However, home broadband usage has exploded, even before the COVID-19 pandemic.

OpenVault reported in early 2018 that the average home used 215 gigabytes of data per month. By the end of 2019, the average home usage had grown to 344 GB monthly. During the pandemic, by the end of March 2020, the average home used 402 GB.

What’s more telling is the percentage of homes that now use a terabyte of data per month. According to OpenVault, that’s now more than 10% of homes – including nearly 2% of homes that use more than 2 terabytes. Just a few years ago only a tiny percentage of homes used a terabyte per month of data. Charter has undoubtedly been measuring customer usage and knows the revenue potential from imposing data caps similar to Comcast or AT&T. If Charter can charge $25 for exceeding the data caps, with their 27 million customers the data caps would increase revenues by over $800 million annually – for usage they are already carrying on their network. Charter, like all of the big ISPs, crowed loudly that their networks were able to easily handle the increase in traffic due to the pandemic. But that’s not going to stop them from milking more money out of their biggest data users.
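The revenue math above is easy to sketch. The ~10% share of homes exceeding a terabyte comes from the OpenVault figures cited earlier; the $25 monthly overage fee is an illustrative assumption:

```python
# Back-of-the-envelope sketch of the Charter data-cap revenue estimate.
# The 10% share of homes over a 1 TB cap is from the OpenVault figures
# cited in the article; the $25 monthly overage fee is illustrative.
subscribers = 27_000_000            # Charter broadband customers
overage_homes = subscribers // 10   # ~10% of homes exceed the cap
overage_fee = 25                    # dollars per month per home over the cap

annual_revenue = overage_homes * overage_fee * 12
print(f"${annual_revenue:,} per year")
```

That works out to roughly $810 million per year, consistent with the "over $800 million" figure above, and every dollar of it is for traffic the network already carries.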

The US already has some of the most expensive broadband in the world. The US landline broadband rates are twice the rates in Europe and the Far East. The US cellular data rates rival the rates in the most expensive remote countries in the world. Data caps imposed by landline and cellular ISPs add huge amounts of margin straight to the bottom lines of the big ISPs and wireless carriers.

What’s saddest about all of this is that there is no regulation of ISPs, and they are free to charge whatever they want for broadband. Even in markets where we see a cable company facing competition with fiber from one of the telcos, there is seemingly no competition on price. Verizon, AT&T, and CenturyLink fiber cost roughly the same in most markets as broadband from cable companies, and the duopoly players in such markets gladly split the customers and the profits for the benefit of both companies.

I’ve written several blogs arguing against data caps and I won’t repeat the whole argument. The bottom line is that it doesn’t cost a big ISP more than a few pennies extra to provide service to a customer that uses a terabyte per month at home compared to a home that uses half that. Data cap revenue goes straight to the bottom line of the big ISPs. For anybody that doesn’t believe that, watch the profits at Charter before and after the day when they introduce data caps.

Verizon Restarts Wireless Gigabit Broadband Roll-out

After a two-year pause, Verizon has launched a new version of its fixed wireless access (FWA) broadband, launching the service in Detroit. Two years ago, the company launched a trial version of the product in Sacramento and a few other cities and then went quiet about the product. The company is still touting this as a 5G product, but it’s not – the product uses millimeter wave radios to replace the fiber drop in a fiber network. For some reason, Verizon is not touting this as fiber-to-the-curb, meaning the marketing folks at the company are electing to stress 5G rather than the fiber aspect of the technology.

Verizon has obviously been doing research and development work, and the new wireless product looks and works differently than the first-generation product. The first product involved mounting an antenna on the outside of the home and then drilling a hole for fiber to enter the home. The new product has a receiver mounted inside a window that faces the street. This receiver connects wirelessly with a home router that looks a lot like an Amazon Echo and comes enabled with Alexa. Verizon is touting that the new product can be self-installed, as is demonstrated on the Verizon web page for the product.

Verizon says the FWA service delivers speeds up to a gigabit. Unlike with fiber, that speed is not guaranteed and is going to vary by home depending upon issues like distance from the transmitter, foliage, and other local issues. Verizon is still pricing this the same as two years ago – $50 per month for customers who buy Verizon wireless products and $70 per month for those who don’t. It doesn’t look like there are any additional or hidden fees, which is part of the new billing philosophy that Verizon announced in late 2019.

The new product eliminates one of the controversial aspects of the first-generation product. Verizon was asking customers to sign an agreement that they could not remove the external antenna even if they dropped the Verizon service. The company was using external antennas to bounce signals to reach additional homes that might have been out of sight of the transmitters on poles. With units mounted inside of homes that kind of secondary transmission path is not going to be possible. This should mean that the network won’t reach out to as many homes.

Verizon is using introductory pricing to push the product. Right now, the web site is offering three months of free service. This also comes with a year of Disney+ for free, Stream TV for free, and a month of YouTube TV for free.

The router connects to everything in the home wirelessly. The wireless router comes with WiFi 6, which is not much of a selling point yet since practically no devices in homes can use the new standard – but over time this will become the standard WiFi deployment. Customers can buy additional WiFi extenders for $200 if needed. It’s hard to tell from the pictures if the router unit has an Ethernet jack.

From a network perspective, this product still requires Verizon to build fiber in neighborhoods and install pole-mounted transmitters to beam the signal into homes. The wireless path to the home is going to require a good line-of-sight, but a customer only needs to find one window where this will work.

From a cost perspective, it’s hard to see how this network will cost less than a standard fiber-to-the-home network. Fiber is required on the street and then a series of transmitters must be installed on poles. For the long run operations of the network, it seems likely that the pole-mounted and home units will have to be periodically replaced, meaning perhaps a higher long-term operational cost than FTTH.

Interestingly, Verizon is not mentioning upload speeds. The pandemic has taught a lot of homes how important upload speeds are. Upload speed is currently one of the biggest vulnerabilities of cable broadband, and I’m surprised not to see Verizon capitalize on this advantage for the product – that’s probably coming later.

Verizon says they still intend to use the technology to pass 30 million homes – the same goal they announced two years ago. Assuming they succeed, they will put a lot of pressure on the cable companies – particularly with pricing. The gigabit-range broadband products from Comcast and Charter cost $100 or more while the Verizon FWA product rivals the prices of the basic broadband products from the cable companies.

Many Libraries Still Have Slow Broadband

During the recent pandemic, a lot of homes came face-to-face with the realization that their home broadband connection is inadequate. Many students trying to finish the school year and people trying to work from home found that their broadband connection would not allow them to connect and maintain connections to school and work servers. Even families who thought they had good broadband found that they were unable to maintain multiple connections for these purposes.

The first thing that many people did when they found that their home broadband wasn’t adequate was to search for some source of public broadband that would let them handle their school or office work. Even in urban areas this wasn’t easy, since most of the places with free broadband, such as coffee shops, were closed – and many didn’t keep their WiFi running to provide even a meager connection for those willing to sit outside.

School officials scrambled and were able in many cases to quickly activate broadband from schools, which in most places have robust broadband. Local governments supplemented this with ideas like putting cellular hot spots on school buses and parking them in areas with poor broadband.

I’m sure that one of the first places that those without broadband tried was the local small-town libraries. Unfortunately, a lot of libraries in rural areas suffer from the same poor broadband as everybody else in the area.

The FCC established goals for library broadband in the 2014 E-Rate Modernization Order: at least 100 Mbps broadband to every library serving a community of less than 50,000 people, and a gigabit for libraries serving larger communities. Unfortunately, many libraries still don’t have good broadband.

In just the last few months, I’ve been working with rural communities where rural libraries get their broadband from cellular hot spots or slow rural DSL connections. It’s hard to imagine being a broadband hub for a community if a library has a 3 to 5 Mbps broadband connection. Libraries with these slow connections gamely try to share the bandwidth with the public – but it obviously barely works. To rub salt in the wounds, some of these slow connections are incredibly expensive. I talked to a library just a few weeks ago that was spending over $500 per month for a dedicated 5 Mbps broadband connection using a cellular hotspot.

The shame of all of this is that the federal funding is available through the E-Rate and a few other programs to try to get better broadband for libraries. Some communities haven’t gotten this funding because nobody was willing to slog through the bureaucracy and paperwork to make it happen. But in most cases, rural libraries don’t have good broadband because it’s not available in many small rural towns. It would require herculean funding to bring fast broadband to a library in a town where nobody else has broadband.

This is not to say that no rural library has good broadband. Some are connected by fiber and have gigabit connections. In many cases these connections are made as part of fiber networks that connect schools or government buildings. These ‘anchor institution’ networks solve the problem of poor broadband in the schools and libraries, but almost always are prohibited from sharing that bandwidth with the homes and businesses in the community.

Of course, there are rural libraries that have good broadband because somebody built a fiber network to connect the whole community. In most cases that means a rural telephone company or telephone cooperative. More recently that might mean an electric cooperative. These organizations bring good broadband to everybody in the community – not just to anchor institutions. Even in these communities the libraries serve a vital role since they can provide WiFi for those that can’t afford to buy the subscription to fiber broadband. Most schools and libraries have found ways to turn the WiFi towards parking lots, and all over rural America there have been daily swarms of cars parked all day where there is public WiFi.

Ultimately, the problems with library broadband are a metaphor for the need for good rural broadband for everybody. Society is not served well when people park all day in a parking lot just to get a meager broadband connection to do school or office work. Folks in rural communities who have suffered through this pandemic are not going to forget it, and local and state politicians better listen to them and help find better broadband solutions.

How Will Cable Companies Cope with COVID-19?

A majority of households today buy broadband from cable companies that operate hybrid fiber-coaxial (HFC) networks using some version of DOCSIS technology. The largest cable companies have upgraded most of their networks to DOCSIS 3.1, which allows for gigabit download speeds.

The biggest weakness in cable networks is the upload data path. The DOCSIS standard limits the upload path to no larger than 1/8th of the total bandwidth used – but it’s not unusual for cable companies to make this path even smaller and offer products like 100/10 Mbps, where the upload is 1/11th of the total bandwidth provided to customers.
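The fractions above come straight from the plan arithmetic – the upload’s share of the total provisioned bandwidth:

```python
# Sketch of the upload-share arithmetic for a DOCSIS broadband plan
# sold as download/upload speeds (e.g. "100/10 Mbps").
def upload_share(down_mbps, up_mbps):
    # The upload's share of the total provisioned bandwidth.
    return up_mbps / (down_mbps + up_mbps)

share = upload_share(100, 10)   # a 100/10 plan: upload is 1/11 of the total
```

A symmetrical fiber plan, by contrast, gives the upload a full half of the provisioned bandwidth, which is why homes noticed the difference once school and work moved home.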

This is not a new concern for the cable companies, and the engineering folks at Comcast and other big cable companies have been discussing ways to improve upload bandwidth for much of the last decade. They understood that the need for uploading would someday overwhelm the bandwidth path provided – they just didn’t expect to get there as explosively as has happened in reaction to the COVID-19 crisis.

Every student and employee trying to work from home is carving out an upstream VPN connection when they connect to a school or work server. Customers are also using significant upload bandwidth when they join a video call on Zoom or other platforms. While carriers report 30–40% overall increases in traffic due to COVID-19, they are not disclosing that a lot of that increase is demand for uploading.

Cable companies are now faced with solving the upload crisis. Practically every prognosticator in the country is predicting that we’re not going to return to pre-COVID behavior. A lot of people are likely to continue working from home. While students will return to the classroom eventually, this grand experiment has shown that it’s feasible to involve students in the classroom remotely, and so school systems are likely to continue this practice for students with long-term illnesses or other reasons they can’t always be in the classroom. Finally, we’ve taught a whole generation of people that video meetings can work, so there is going to be a whole lot more of that. The days of traveling to attend a few-hour meeting might be over.

There is one other interesting fact to consider when looking at a cable company upload data path. Cable companies have generally devalued the upload path quality and have assigned the upload path to the low frequencies on the cable network spectrum. Historically upload data speeds were provisioned on the 5-42 MHz range of spectrum. This is the spectrum in a cable system that experiences the most interference from things like microwave ovens, vacuum cleaners and passing large trucks. Cable companies could get away with this because historically most people didn’t care if it took longer to upload a file or if packets had to be retransmitted due to interference. But people connecting to WANs and video conferences care about the upload quality as well as speed.

One solution, and something that some cable providers have already done, is what is called a mid-split upgrade, which extends the spectrum for uploading to the 5-85 MHz band. This still includes a patch of the worst spectrum inside the cable system, but is a significant boost in the amount of upload bandwidth available. Depending upon the set-top boxes being used, this upgrade can require some new customer boxes.
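The gain from a mid-split is easy to quantify in terms of raw spectrum. This sketch compares only spectrum width; real throughput also depends on the modulation running in those bands, which isn’t considered here:

```python
# Rough arithmetic for the mid-split upgrade described above, comparing
# the width of the traditional and mid-split upstream bands. Spectrum
# width only - actual throughput also depends on modulation.
legacy_upstream_mhz = 42 - 5     # traditional 5-42 MHz upload band -> 37 MHz
midsplit_upstream_mhz = 85 - 5   # mid-split 5-85 MHz upload band  -> 80 MHz

gain = midsplit_upstream_mhz / legacy_upstream_mhz   # roughly 2.2x the spectrum
```

So a mid-split roughly doubles the upstream spectrum without abandoning DOCSIS, which is why it’s the first tool cable operators reach for.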

Another idea is to do more traditional node splits, meaning to reduce the number of customers included in a neighborhood node. Traditionally, node splits were done to improve the performance of download speeds – this was the fastest way to relieve network congestion when a local neighborhood network bogged down unduly in the evening. It’s an interesting idea to consider splitting nodes to relieve pressure on the upload data path.

After those two ideas, the upgrades get expensive. Migrating to switched digital video could free up a mountain of system bandwidth, which would allow for a larger data path, including an enlarged upload path. The downside of this kind of upgrade is that it moves outside of the DOCSIS technology and starts to look more like providing Ethernet over fiber. This is not just a forklift upgrade; it changes the basic way the network operates.

The final way to get more upload speed would be an upgrade to the upcoming DOCSIS 4.0 standard. Everything I read about this makes it sound expensive. But the new standard would allow for nearly symmetrical data services and would let cable network broadband compete head-on with fiber networks. It will be interesting to see if the cable companies view the upload crisis as bad enough to warrant spending huge amounts of money to fix the problem.

Big Regional Network Outages

T-Mobile had a major network outage last week that cut off some voice calls and most texting for nearly a whole day. The company’s explanation of the outage was provided by Neville Ray, the president of technology.

“The trigger event is known to be a leased fiber circuit failure from a third party provider in the Southeast. This is something that happens on every mobile network, so we’ve worked with our vendors to build redundancy and resiliency to make sure that these types of circuit failures don’t affect customers. This redundancy failed us and resulted in an overload situation that was then compounded by other factors. This overload resulted in an IP traffic storm that spread from the Southeast to create significant capacity issues across the IMS (IP Multimedia Subsystem) core network that supports VoLTE calls.”

In plain English, the electronics failed on a leased circuit, and then the back-up circuit also failed. This then caused a cascade that brought down a large part of the T-Mobile network.

You may recall that something similar happened to CenturyLink about two years ago. At the time the company blamed the outage on a bad circuit card in Denver that somehow cascaded to bring down a large swath of fiber networks in the West, including numerous 911 centers. Since that outage, there have been numerous regional outages, which is one of the reasons that Project THOR recently launched in Colorado – the cities in that region could no longer tolerate the recurring multi-hour or even day-long regional network outages.

Having electronics fail is a somewhat common event. This is particularly true on circuits provided by the big carriers which tend to push the electronics to the max and keep equipment running to the last possible moment of its useful life. Anybody visiting a major telecom hub would likely be aghast at the age of some of the electronics still being used to transmit voice and data traffic.

I can recall two of my clients that have had similar experiences in the last few years. They had a leased circuit fail and then also saw the redundant path fail as well. In both cases, it turns out that the culprit was the provider of the leased circuits, which did not provide true redundancy. Although my clients had paid for redundancy, the carrier had sold them primary and backup circuits that shared some of the same electronics at key points in the network – and when those key points failed their whole network went down.

However, what is unusual about the two big carrier outages is that the outages somehow cascaded into big regional outages. That was largely unheard of a decade ago. This reminds me more of what we saw in the past in the power grid, when power outages in one town could cascade over large areas. The power companies have been trying to remedy this situation by breaking the power grid into smaller regional networks and putting in protection so that failures can’t overwhelm the interfaces between regional networks. In essence, the power companies have been trying to introduce some of the good lessons learned over time by the big telecom companies.

But it seems that the big telecom carriers are going in the opposite direction. I talked to several retired telecom network engineers and they all made the same guess about why we are seeing big regional outages. The telecom network used to be comprised of hundreds of regional hubs. Each hub had its own staff and operations and it was physically impossible for a problem from one hub to somehow take down a neighboring hub. The worst that would happen is that routes between hubs could go dark, but the problem never moved past the original hub.

The big telcos have all had huge numbers of layoffs over the last decade, and those purges have emptied the big companies of the technicians who built and understood the networks. Meanwhile, the companies are trying to find efficiencies to get by with smaller staffing. It appears that the efficiencies that have been found are to introduce network solutions that cover large areas or even the whole nation. This means that identical software and technicians are now being used to control giant swaths of the network. This homogenization and central control of a network means that a failure in any one place in the network might cascade into a larger problem if the centralized software and/or technicians react improperly to a local outage. It’s likely that the big outages we’re starting to routinely see are caused by a combination of the failures of people and software systems.

A few decades ago we had somewhat regular power outages that affected multiple states. At the prodding of the government, the power companies undertook a nationwide effort to stop cascading outages, and in doing so they effectively emulated the old telecom network. They ended the ability of an electric grid to automatically interface with neighboring grids, and the last major power outage that wasn’t due to weather happened in the West in 2011.

I’ve seen absolutely no regulatory recognition of the major telecom outages we’ve been seeing. Without the FCC pushing the big telcos, it’s highly likely nothing will change. It’s frustrating to watch the telecom networks deteriorate at the same time that electric companies got together and fixed their issues.

Verizon’s Network Performance

Verizon has been posting a weekly report on how COVID-19 has been impacting its network. The weekly blogs are rather short on facts, and it’s clear that the intent of this weekly report is to reassure investors that the company’s networks are coping with the burst of traffic that has come as a result of the pandemic. That said, the facts that are discussed are interesting.

Verizon led off the weekly entry for 5/21 by saying that voice and text traffic are starting to return to pre-COVID levels. On the most recent Monday, Verizon saw 776 million voice calls, down from 860 million calls at the peak of COVID-19. That falls under the category of interesting fact, but heavier telephone call volumes are not a cause of undue stress on the Verizon network. Telephone calls use tiny amounts of bandwidth – 64 kbps. Thirty telephone calls will fit into the same-size data path as one Netflix stream. Additionally, once voice calls reach a Verizon hub, they are routed over the separate public switched telephone network (PSTN) to be transported across the country. Text messages use far less data than a telephone call and are barely noticed on telco networks.
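The voice-versus-video comparison above is easy to verify with back-of-the-envelope arithmetic. The ~2 Mbps figure for a standard-definition Netflix stream is an assumption for illustration, not a number from Verizon’s reports:

```python
# Back-of-the-envelope: how many 64 kbps voice calls fit in one video stream?
VOICE_CALL_KBPS = 64          # a standard uncompressed telephone call
NETFLIX_STREAM_KBPS = 2_000   # ~2 Mbps standard-definition stream (assumed)

calls_per_stream = NETFLIX_STREAM_KBPS // VOICE_CALL_KBPS
print(calls_per_stream)  # 31 -- roughly the "thirty calls" cited above
```

Even doubling the assumed stream bitrate only widens the gap, which is why call volume is a non-issue for broadband capacity.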

The bigger news is that some other traffic is staying at elevated levels. Verizon reported for 5/21 that gaming is still up 82% over pre-pandemic levels and that VPN connections used to connect to school and work servers are up 72%. The use of collaborative tools like Zoom and Go-to-Meeting is running at ten times (1,000% of) pre-COVID levels.

One of the more interesting statistics is that network mobility (people driving or walking and switching between cell towers) has increased in recent weeks and that one-third of states now have higher levels of mobility than pre-COVID. At first that’s a little hard to believe, until you realize that in pre-COVID times students and employees were largely stationary at school or the office for much of the day – any roaming by stay-at-home people is an increase.

Reading back through the weekly statistics shows that most web activities are at higher levels than pre-COVID. For example, in the 4/22 report the volumes of downloading, gaming, video usage, VPNs, and overall web traffic were all higher than normal, with the only decrease being in the volume of social media traffic.

What none of these reports talk about is the stress put on the Verizon networks. It’s easy in reading these reports to forget that Verizon wears many hats and operates many networks. The company is still a regulated telco in the northeast and still has a lot of telephone customers, which also means it still operates a sizable DSL network. The company, through Verizon FiOS, is still the largest fiber-to-the-home provider. The company also owns an extensive enterprise and long-haul fiber network. Verizon also operates one of the largest cellular networks in the world.

When Verizon says all is well, they can’t mean that for each of these networks. The web is full right now of complaints from DSL customers (Verizon’s and other big telcos’) about how inadequate DSL is for working at home. The Verizon DSL network was already overstressed in the evenings and has to be near the point of collapse due to the big increases in VPN and collaboration connections. Any Verizon DSL customer reading a Verizon blog that says everything is fine is probably spitting fire.

By contrast, Verizon’s FiOS network is likely handling the pandemic traffic with ease. Verizon FTTH products have offered symmetrical data speeds for years, with the upload data path historically lightly utilized. The big uptick in VPN and collaboration connections ought to be handled well on that network. Any glitches might come from older FiOS neighborhoods where the backhaul paths out of the neighborhoods are too small.

What neither Verizon nor AT&T has talked about is the different impact on their various networks. For example, what’s the overall change in data usage on their cellular networks compared to their other networks? The big telcos have been mum on this kind of detail, because admitting that some of their networks are handling the pandemic well might lead to an admission that other parts of the company are not doing so well. Instead we get the very generic story that everything is fine with the company and its networks.

These companies probably do not have any obligations to report about their various networks in detail. Verizon DSL customers don’t need company pronouncements to know that their broadband experience has nearly collapsed since the pandemic. FiOS customers are likely happy that their broadband has weathered the storm. One of these days I’ll hopefully have a beer with some Verizon engineer who can tell me what really happened – both good and bad – behind the scenes.

Easing Fiber Construction

Almost every community wants fiber broadband, but I’ve found that there are still a lot of communities with ordinances or processes in place that add cost and time for somebody trying to build fiber. One of the tasks I always ask cities to undertake is an internal review of all of the processes that apply to somebody who wants to build fiber, to identify areas that an ISP will find troublesome. Such a review might look at the following:

Permitting. Most cities have permitting rules to stop companies from digging up the streets at random, and ISPs expect to have to file permits to dig under streets or to get onto city-owned utility poles. However, we’ve run into permitting issues that were a major hindrance to building fiber.

  • One of my clients wanted to hang fiber on city-owned poles and found out that the city required a separate permit for each pole. The paperwork involved with that would have been staggering.
  • We worked in another city where the City wanted a $5,000 non-refundable fee from each new entity wanting to do business in the city. Nobody at the City could recall why the fee was so high, and staff speculated that it had been set high to deter a specific company the city hadn’t wanted working there.
  • I’ve seen a number of cities that wanted a full set of engineering drawings for the work to be done and expected no deviance from the plans. Very few ISPs do that level of engineering up front and instead have engineers working in front of construction crews to make the final calls on facility placement as the project is constructed.

Rights-of-Way. Cities and counties own the public rights-of-way on the roads under their control. Most cities want fiber badly enough to provide rights-of-way to somebody that is going to build fiber. But we’ve seen cities that have imposed big fees for getting rights-of-way or that want sizable annual payments for the continued use of the rights-of-way.

I’ve seen fiber overbuilders bypass towns that overvalue the rights-of-way. Many cities are desperate for tax revenues and assume anybody building fiber can afford high up-front fees or an ongoing assessment. These cities fail to realize that most fiber business plans have slim margins and that high fees might be enough to convince an ISP to build somewhere else.

Work Rules. These are rules imposed by a city that require work to be done in a certain way. For example, we’ve seen fiber projects in small towns that required flagmen to always be present even though the residential streets didn’t see more than a few cars in an afternoon. That’s a lot of extra cost added to the construction cost that most builders would view as unnecessary.

We’ve seen some squirrelly rules for work hours. Many cities don’t allow work on Saturdays, but most work crews prefer to work 6-day weeks. We’ve also seen work hours restricted on school days to only the hours that school is in session, such as 9:00 to 2:00. Anybody who has set up and torn down a boring rig knows that this kind of schedule will cut the daily feet of boring in half.
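To see why a shorter window costs more footage than the raw hours suggest, consider that rig setup and teardown is a fixed daily cost that comes out of the allowed window. The hours and boring rate below are hypothetical numbers chosen for illustration, not figures from any actual project:

```python
# Illustrative only: setup/teardown time and boring rate are assumptions.
SETUP_TEARDOWN_HOURS = 2.0   # assumed fixed daily cost of rigging up and down
FEET_PER_HOUR = 60.0         # assumed boring rate once the rig is running

def daily_feet(window_hours: float) -> float:
    """Productive boring footage for a given allowed work window."""
    productive_hours = max(window_hours - SETUP_TEARDOWN_HOURS, 0.0)
    return productive_hours * FEET_PER_HOUR

full_day = daily_feet(10.0)    # e.g., a 7:00-5:00 work window
school_day = daily_feet(5.0)   # the 9:00-2:00 restriction
print(full_day, school_day)    # 480.0 180.0
```

Under these assumed numbers, halving the window cuts daily footage to well under half, because the fixed setup/teardown time eats a larger share of the shorter day.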

Timeliness. It’s not unusual for cities to be slow at tasks that involve City staff. For example, if a City does its own locates for buried utilities, it’s vital that it perform the locates on a timely basis so as not to idle work crews. In the most extreme case, I’ve seen locates put on hold while the person doing them went on a long vacation.

We’ve also seen cities that are slow on inspecting sites after construction. Fiber work crews move out of a neighborhood or out of the town when construction is complete, and cities need to inspect the roads and poles while the crews are still in the market.

In many cases, the work practices in place in a city are not the result of an ordinance but were created over time in reaction to the past behavior of other utilities. In other cases, some of the worst practices are captured in ordinances that likely came about when some utility really annoyed the elected officials of the day. A city shouldn’t roll over and relax all rules for a fiber builder, because such changes will be noticed by the other utilities, which will want the same treatment. But cities need to eliminate rules that add unnecessary cost to bringing fiber.

We always caution ISPs to not assume that construction rules in a given community will be what is normally expected. It’s always a good idea to have a discussion with a city about all of the various rules long before the fiber work crews show up.