Many Libraries Still Have Slow Broadband

During the recent pandemic, a lot of homes came face-to-face with the realization that their home broadband connection is inadequate. Many students trying to finish the school year and people trying to work from home found that their broadband connection would not let them establish and maintain connections to school and work servers. Even families who thought they had good broadband found that they were unable to maintain multiple simultaneous connections for these purposes.

The first thing that many people did when they found that their home broadband wasn’t adequate was to search for some source of public broadband that would enable them to handle their school or office work. Even in urban areas this wasn’t easy, since most of the places with free broadband, such as coffee shops, were closed and didn’t keep their WiFi connected to deliver even a meager connection for those willing to sit outside.

School officials scrambled and in many cases were able to quickly activate public broadband from schools, which in most places have robust connections. Local governments supplemented this with ideas like putting cellular hot spots on school buses and parking them in areas with poor broadband.

I’m sure that one of the first places that those without broadband tried was the local small-town library. Unfortunately, a lot of libraries in rural areas suffer from the same poor broadband as everybody else in the area.

The FCC established a goal for library broadband in the 2014 E-Rate Modernization Order: at least 100 Mbps to every library serving a community of fewer than 50,000 people. The goal for libraries serving larger communities was set at a gigabit. Unfortunately, many libraries still don’t have good broadband.

In just the last few months, I’ve been working with rural communities where rural libraries get their broadband from cellular hot spots or slow rural DSL connections. It’s hard to imagine being a broadband hub for a community if a library has a 3 to 5 Mbps broadband connection. Libraries with these slow connections gamely try to share the bandwidth with the public – but it obviously barely works. To rub salt in the wounds, some of these slow connections are incredibly expensive. I talked to a library just a few weeks ago that was spending over $500 per month for a dedicated 5 Mbps broadband connection using a cellular hotspot.

The shame of all of this is that federal funding is available through E-Rate and a few other programs to get better broadband for libraries. Some communities haven’t gotten this funding because nobody was willing to slog through the bureaucracy and paperwork to make it happen. But in most cases, rural libraries don’t have good broadband because it’s not available in many small rural towns. It would require herculean funding to bring fast broadband to a library in a town where nobody else has broadband.

This is not to say that no rural library has good broadband. Some are connected by fiber and have gigabit connections. In many cases these connections are made as part of fiber networks that connect schools or government buildings. These ‘anchor institution’ networks solve the problem of poor broadband in the schools and libraries, but they are almost always prohibited from sharing that bandwidth with the homes and businesses in the community.

Of course, there are rural libraries that have good broadband because somebody built a fiber network to connect the whole community. In most cases that means a rural telephone company or telephone cooperative. More recently that might mean an electric cooperative. These organizations bring good broadband to everybody in the community – not just to anchor institutions. Even in these communities the libraries serve a vital role since they can provide WiFi for those who can’t afford a fiber broadband subscription. Most schools and libraries have found ways to aim their WiFi toward parking lots, and all over rural America there have been daily swarms of cars parked all day where there is public WiFi.

Ultimately, the problems with library broadband are a metaphor for the need for good rural broadband for everybody. Society is not served well when people park all day in a parking lot just to get a meager broadband connection to do school or office work. Folks in rural communities who have suffered through this pandemic are not going to forget it, and local and state politicians better listen to them and help find better broadband solutions.

Our Uneven Regulatory Environment

I think everybody would agree that broadband is a far more important part of the American economy than landline telephone service. While something in the range of 35% of homes still have a landline, almost every home has or wants a broadband connection. If you knew nothing about our regulatory history in the U.S., you would guess that the FCC would be far more involved with broadband issues than landline telephone issues – but they’re not. Consider some of the recent regulatory actions at the FCC as evidence of how regulation is now unbalanced and mostly looks at voice issues.

Recently the FCC took action against MagicJack VocalTec Ltd. The FCC reached a settlement with MagicJack to pay $5 million in contributions to the Universal Service Fund. MagicJack also agreed to implement a regulatory compliance plan to stay in compliance with FCC rules.

The contributions to the Universal Service Fund come from a whopping 26.5% tax on the interstate portion of telephone service, and MagicJack has refused for years to make these payments. The company has been skirting FCC rules for years – which is what allows it to offer low-priced telephone service.
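As a rough illustration of how that contribution math works, consider the sketch below. Only the 26.5% factor comes from the text; the bill amount and the interstate share are made-up numbers, not figures from the MagicJack settlement.

```python
# Hypothetical illustration of the USF contribution math described above.
# Only the 26.5% factor comes from the text; the bill and the interstate
# share are made-up numbers.
usf_factor = 0.265            # USF contribution factor on interstate service

monthly_bill = 30.00          # hypothetical retail price of a phone line
interstate_share = 0.50       # hypothetical portion of the bill deemed interstate

interstate_revenue = monthly_bill * interstate_share
usf_contribution = interstate_revenue * usf_factor

print(f"USF owed on this line: ${usf_contribution:.2f} per month")  # roughly $3.98
```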

The FCC also recently came down hard on telcos that are making a lot of money by billing excessive access charges for calls to services like Free Conference Calling.com and chat lines. These services made arrangements with remote LECs that bill access charges on a lot of miles of fiber transport. The FCC ruled that these LECs were ‘access stimulators’ and that the long-distance companies and their customers were unfairly subsidizing free conference calling. In one of the fastest FCC reactions I can recall, just a few months after the initial ruling the FCC also published orders denying appeals to that order.

From a regulatory perspective, these kinds of actions are exactly the sort of activity one would expect out of a regulatory agency. These two examples are just a sample of a few dozen actions the FCC has taken in the last few years in its regulation of landline telephone service. The agency has been a little less busy with cable TV but has also looked at those issues over the last year.

Contrast this with broadband, which any person on the street would think would be the FCC’s primary area of regulation. After all, broadband is by far the most important communications service and affects far more homes and businesses than telephone service or cable TV service. But the regulatory record shows a real dearth of action in the area of broadband regulation.

In December 2019 Congress passed the Television Viewer Protection Act, which prohibits ISPs and cable companies from billing customers for devices that the customer owns. It’s odd that a law would even be needed for something so commonsense, but Frontier and some cable companies have been billing customers for devices that had previously been sold to those customers. In one example that has gotten a lot of press, Frontier has been billing a $10 fee for routers that customers purchased from Verizon before Frontier bought the property.

Frontier appealed the immediate implementation of the new law to the FCC. The telco said that due to COVID-19 the company is too busy to change its practices and asked to be able to continue the overbilling until the end of this year. In a brave regulatory move in April, the FCC agreed with Frontier and will allow them to continue to overbill customers for such devices until the end of 2020.

I was puzzled by this ruling for several reasons. From a practical perspective, the regulators in the U.S. have normally corrected carrier wrongs by ordering refunds. It’s impossible to believe that Frontier couldn’t make this billing change, with or without COVID. But even if it takes them a long time to implement it, the normal regulatory remedy is to give customers back money that was billed incorrectly. Instead, the FCC told Frontier and cable companies that they could continue to rip off customers until the end of the year, in violation of the intent of the law written by Congress.

A more puzzling concern is why the FCC even ruled on this issue. When the agency killed Title II regulation, they also openly announced that they have no regulatory authority over broadband. My first thought when reading this order was to wonder if the FCC even has jurisdiction any longer to rule on issues like data modems. However, in this case, the Congress gave them the narrow authority to rule on issues related to this specific law. As hard as the FCC tries, these little nagging broadband issues keep landing in their lap – because there is no other place for them to go.

In this case, the FCC dipped briefly into a broadband issue and got it 100% wrong. Rather than rule for the customers who were being billed fraudulent charges, the FCC went against the intent of the Congress that passed the law clarifying the issue and bought into the story that Frontier couldn’t fix its billing systems until a year after the law was passed. And for some reason, even after buying the story, the FCC didn’t order a full refund of past overbilling.

If we actually had light-touch broadband regulation, then the FCC would be able to weigh in when industry actors behave badly, as happened in the two telephone dockets described above. But our light-touch regulation is really no-touch regulation, and the FCC has no jurisdiction over broadband except in snippets where Congress gives them a specific task. The FCC ruling is puzzling. We know they favor the big ISPs, but siding with Frontier’s decision to openly rip off customers seems like an odd place to make a pro-ISP stand. As much as I’ve complained about this FCC giving up its broadband regulatory authority – perhaps we don’t want this to be fixed until we get regulators who will apply the same standards to broadband that they apply to telephone service.

How Will Cable Companies Cope with COVID-19?

A majority of households today buy broadband from cable companies that operate hybrid fiber-coaxial (HFC) networks using some version of DOCSIS technology to control the network. The largest cable companies have upgraded most of their networks to DOCSIS 3.1, which allows for gigabit download speeds.

The biggest weakness in cable networks is the upload data path. The DOCSIS standard limits the upload path to no larger than 1/8th of the total bandwidth used – but it’s not unusual for cable companies to make this path even smaller and offer products like 100/10 Mbps, where the upload is 1/11th of the total bandwidth provided to customers.
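As a quick sanity check on those fractions, here is a small sketch. The 100/10 Mbps tier and the 1/8th ceiling are the figures from the paragraph above; the other tiers are hypothetical examples.

```python
# Upload share of total provisioned bandwidth for a few example speed tiers.
tiers = {
    "100/10 Mbps": (100, 10),     # tier mentioned in the text
    "200/10 Mbps": (200, 10),     # hypothetical example
    "1000/35 Mbps": (1000, 35),   # hypothetical example
}

docsis_ceiling = 1 / 8  # upload limited to no more than 1/8th of total bandwidth

for name, (down, up) in tiers.items():
    share = up / (down + up)
    print(f"{name}: upload is 1/{round(1 / share)} of total bandwidth "
          f"({share:.1%}, DOCSIS ceiling {docsis_ceiling:.1%})")
```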

This is not a new concern for the cable companies, and the engineering folks at Comcast and other big cable companies have been discussing ways to improve upload bandwidth for much of the last decade. They understood that the need for uploading would someday overwhelm the bandwidth path provided – they just didn’t expect to get there as explosively as has happened in reaction to the COVID-19 crisis.

Every student and employee trying to work from home is carving out an upload VPN connection to a school or work server. Customers are also using significant upload bandwidth when they join a video call on Zoom or other platforms. While carriers report 30–40% overall increases in traffic due to COVID-19, they are not disclosing that a lot of that increase is demand for uploading.

Cable companies are now faced with solving the upload crisis. Practically every prognosticator in the country is predicting that we’re not going to return to pre-COVID behavior. There are likely to be a lot of people who will continue to work from home. While students will return to the classroom eventually, this grand experiment has shown that it’s feasible to involve students in the classroom remotely, and so school systems are likely to continue this practice for students with long-term illnesses or other reasons why they can’t always be in the classroom. Finally, we’ve taught a whole generation of people that video meetings can work, so there is going to be a whole lot more of that. The days of traveling to attend a few-hour meeting might be over.

There is one other interesting fact to consider when looking at a cable company upload data path. Cable companies have generally devalued the upload path quality and have assigned the upload path to the low frequencies on the cable network spectrum. Historically upload data speeds were provisioned on the 5-42 MHz range of spectrum. This is the spectrum in a cable system that experiences the most interference from things like microwave ovens, vacuum cleaners and passing large trucks. Cable companies could get away with this because historically most people didn’t care if it took longer to upload a file or if packets had to be retransmitted due to interference. But people connecting to WANs and video conferences care about the upload quality as well as speed.

One solution, which some cable providers have already implemented, is what is called a mid-split upgrade that extends the upload spectrum to the 5-85 MHz band. This still includes a patch of the worst spectrum inside the cable system, but it is a significant boost in the amount of upload bandwidth available. Depending upon the set-top boxes being used, this upgrade can require some new customer boxes.
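To put rough numbers on the mid-split change, the spectrum math looks like the sketch below. The frequency ranges are the ones cited above; actual throughput also depends on modulation, channel width, and plant quality, which this ignores.

```python
# Rough comparison of upload spectrum before and after a mid-split upgrade.
subsplit_mhz = 42 - 5    # traditional upload band: 5-42 MHz
midsplit_mhz = 85 - 5    # mid-split upload band: 5-85 MHz

gain = midsplit_mhz / subsplit_mhz
print(f"Upload spectrum grows from {subsplit_mhz} MHz to {midsplit_mhz} MHz "
      f"(about {gain:.1f}x more raw upload spectrum)")
```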

Another idea is to do more traditional node splits, meaning to reduce the number of customers included in a neighborhood node. Traditionally, node splits were done to improve the performance of download speeds – this was the fastest way to relieve network congestion when a local neighborhood network bogged down unduly in the evening. It’s an interesting idea to consider splitting nodes to relieve pressure on the upload data path.

After those two ideas, the upgrades get expensive. Migrating to switched digital video could free up a mountain of system bandwidth, which would allow for a larger data path, including an enlarged upload path. The downside of this kind of upgrade is that it moves outside of the DOCSIS technology and starts to look more like providing Ethernet over fiber. This is not just a forklift upgrade; it changes the basic way the network operates.

The final way to get more upload speed would be an upgrade to the upcoming DOCSIS 4.0 standard. Everything I read about this makes it sound expensive. But the new standard would allow for nearly symmetrical data services and would let cable broadband compete head-on with fiber networks. It will be interesting to see if the cable companies view the upload crisis as bad enough to warrant spending huge amounts of money to fix the problem.

The FCC Muddles the RDOF Grants

Last week the FCC ‘clarified’ the RDOF rules in a way that left most of the industry feeling less sure about how the auction will work.  The FCC is now supposedly taking a technologically neutral position on the auction. That means that the FCC has reopened the door for low-earth orbit satellites. Strangely, Chairman Ajit Pai said that the rules would even allow DSL or fixed wireless providers to participate in the gigabit speed tier.

Technologically neutral may sound like a fair idea, but in this case it’s absurd. The idea that DSL or fixed wireless could deliver gigabit speeds is so far outside the realm of physics as to be laughable. It’s more likely that these changes are aimed at allowing satellite, DSL, and fixed wireless providers to enter the auction at speeds faster than they can deliver.

For example, by saying that DSL can enter the auction at a gigabit, it might go more unnoticed if telcos enter the auction at the 100/10 Mbps tier. There is zero chance for rural DSL to reach those speeds – the CAF II awards six years ago didn’t result in a lot of rural DSL that delivers even 10/1 Mbps. It’s worth remembering that the RDOF funding is going to some of the most remote Census blocks in the country, where homes are likely many miles from a DSL hub and also not concentrated in pockets – two factors that account for why rural DSL often has speeds that are not a lot faster than dial-up.

Any decision to allow low-orbit satellites into the auction has to be political. There are members of Congress now pushing for satellite broadband. In my state of North Carolina there is even a bill in the Senate (SB 1228) that would provide $2.5 million for satellite broadband as a preferred solution for rural broadband.

The politics behind low orbit satellite broadband is crazy because there is not yet any such technology that can deliver broadband to people. Elon Musk’s satellite company currently has 362 satellites in orbit. That may sound impressive, but a functional array of satellites is going to require thousands of satellites – the company’s filed plan with the FCC calls for 4,000 satellites as the first phase deployment.

I’ve seen a lot of speculation in the financial and space press that Starlink will have a hard time raising the money needed to finish the constellation of satellites. A lot of the companies that were going to invest are now reluctant due to COVID-19. The other current competitor to Starlink is OneWeb, which went bankrupt a few months ago and may never come out of receivership. Jeff Bezos has been rumored to be launching a satellite business but still has not launched a single satellite.

The danger of letting these various technologies into the RDOF process is that a lot of rural households might again get screwed by the FCC and not get broadband after a giant FCC grant. That’s what happened with CAF II where over $9 billion was handed to the big telcos and was effectively washed down the drain in terms of any lasting benefits to rural broadband.

It’s not hard to envision Elon Musk and Starlink winning a lot of money in the RDOF auction and then failing to complete the business plan. The company has an automatic advantage over any company they are bidding against since Starlink can bid lower than any other bidder and still be ahead of the game. It’s not an implausible scenario to foresee Starlink winning every contested Census block.

Allowing DSL and fixed wireless providers to overstate their technical capacity will be just as damaging. Does anybody think that if Frontier wins money in this auction they will do much more than pocket it straight to the bottom line? Rural America is badly harmed if a carrier wins RDOF money and doesn’t deliver the technology that was promised – particularly if that grant winner unfairly beat out somebody that would have delivered a faster technology. One only has to look back at the awards made to Viasat in the CAF II reverse auction to see how absurd it is when inferior technologies are allowed in the auction.

Probably the worst thing about the RDOF rules is that somebody who doesn’t deliver doesn’t have to give back all of the grant money. Even if no customer is ever served, or no customer ever receives the promised speeds, the grant winner gets to keep a substantial percentage of the grant funding.

As usual, this FCC is hiding its real intentions under the technology-neutral stance. This auction doesn’t need the FCC to be ‘technology neutral’, and technologies that don’t yet exist, like LEO satellites, or technologies that can’t deliver the speed tiers should not be allowed into the auction. I’m already cringing at the vision of a lot of grant winners that have no business getting a government subsidy at a time when COVID-19 has magnified the need for better rural broadband.

Big Regional Network Outages

T-Mobile had a major network outage last week that cut off some voice calls and most texting for nearly a whole day. The company’s explanation of the outage was provided by Neville Ray, the president of technology:

The trigger event is known to be a leased fiber circuit failure from a third party provider in the Southeast. This is something that happens on every mobile network, so we’ve worked with our vendors to build redundancy and resiliency to make sure that these types of circuit failures don’t affect customers. This redundancy failed us and resulted in an overload situation that was then compounded by other factors. This overload resulted in an IP traffic storm that spread from the Southeast to create significant capacity issues across the IMS (IP multimedia Subsystem) core network that supports VoLTE calls.

In plain English, the electronics failed on a leased circuit, and then the back-up circuit also failed. This then caused a cascade that brought down a large part of the T-Mobile network.

You may recall that something similar happened to CenturyLink about two years ago. At the time the company blamed the outage on a bad circuit card in Denver that somehow cascaded to bring down a large swath of fiber networks in the West, including numerous 911 centers. Since that outage, there have been numerous regional outages, which is one of the reasons that Project THOR recently launched in Colorado – the cities in that region could no longer tolerate the recurring multi-hour or even day-long regional network outages.

Having electronics fail is a somewhat common event. This is particularly true on circuits provided by the big carriers which tend to push the electronics to the max and keep equipment running to the last possible moment of its useful life. Anybody visiting a major telecom hub would likely be aghast at the age of some of the electronics still being used to transmit voice and data traffic.

I can recall two of my clients that have had similar experiences in the last few years. They had a leased circuit fail and then saw the redundant path fail as well. In both cases, it turns out that the culprit was the provider of the leased circuits, which did not provide true redundancy. Although my clients had paid for redundancy, the carrier had sold them primary and backup circuits that shared some of the same electronics at key points in the network – and when those key points failed their whole network went down.

However, what is unusual about the two big carrier outages is that they somehow cascaded into big regional outages. That was largely unheard of a decade ago. This reminds me of what we saw in the past in the power grid, when power outages in one town could cascade over large areas. The power companies have been trying to remedy this situation by breaking the power grid into smaller regional networks and putting in protection so that failures can’t overwhelm the interfaces between regional networks. In essence, the power companies have been trying to adopt some of the good lessons learned over time by the big telecom companies.

But it seems that the big telecom carriers are going in the opposite direction. I talked to several retired telecom network engineers and they all made the same guess about why we are seeing big regional outages. The telecom network used to be comprised of hundreds of regional hubs. Each hub had its own staff and operations and it was physically impossible for a problem from one hub to somehow take down a neighboring hub. The worst that would happen is that routes between hubs could go dark, but the problem never moved past the original hub.

The big telcos have all had huge numbers of layoffs over the last decade, and those purges have emptied the big companies of the technicians who built and understood the networks. Meanwhile, the companies are trying to find efficiencies to get by with smaller staffing. It appears that the efficiencies that have been found are to introduce network solutions that cover large areas or even the whole nation. This means that identical software and the same technicians are now being used to control giant swaths of the network. This homogenization and central control of a network means that a failure in any one place in the network might cascade into a larger problem if the centralized software and/or technicians react improperly to a local outage. It’s likely that the big outages we’re starting to routinely see are caused by a combination of failures of people and software systems.

A few decades ago we had somewhat regular power outages that affected multiple states. At the prodding of the government, the power companies undertook a nationwide effort to stop cascading outages, and in doing so they effectively emulated the old telecom network world. They ended the ability for an electric grid to automatically interface with neighboring grids, and the last major power outage that wasn’t due to weather happened in the West in 2011.

I’ve seen absolutely no regulatory recognition of the major telecom outages we’ve been seeing. Without the FCC pushing the big telcos, it’s highly likely nothing will change. It’s frustrating to watch the telecom networks deteriorate at the same time that electric companies got together and fixed their issues.

Work-at-Home as a Product

Even before COVID-19, we were headed towards a future with more people working at home, at least part-time. I’ve seen estimates pre-COVID that as many as 10% of office workdays are done from home – that number has currently skyrocketed and it’s likely that working from home will never return to the old levels.

For working at home to be most effective, employees must have easy access to the same software and the same data as when they work in the office. Employers still have the same goals for data security and for protecting sensitive company data and customer data. Workers at home need to be protected from phishing, malware, and other attempts to gain access to customer data.

This all comes at a time when we’ve undergone a transition to security that is based upon building walls around sensitive data. Companies have made data more secure by restricting access to data from outside the company buildings. Twenty years ago it was common for companies to allow workers to dial in to company servers, but over time those connections proved to be the easiest path for hackers to gain access to company data. Companies have built data fortresses to protect data from external access, and suddenly, companies are being asked to poke holes in those walls to allow employees to gain access to company systems from home.

To complicate matters even further, in the last five years many mid-sized companies shed IT staff as they moved everything to the cloud. Many companies are not staffed or equipped to make the shift to allow working from home, meaning that opening up their networks to home-based employees has automatically opened new risks to hacking.

The question I ask today is whether there is a broadband solution that smaller ISPs can offer to make it safer for companies to support employees working from home. The biggest carriers already have such solutions, at least for their largest corporate clients. For example, AT&T and Verizon have had products that allow for guaranteed, secured data connections for corporate or government cell phones. Fortune 500 companies and the military have been able to buy similar products to provide for safe remote wireline broadband connections.

AT&T just announced a new product called AT&T Home Office Connectivity that will work on DSL, fiber, or AT&T wireless. The product essentially creates a carrier-class VPN between employees and a virtual gateway that connects to a company WAN. The AT&T solution terminates the multitude of employee connections in the AT&T cloud while creating only one path between AT&T and the company servers.

It’s still questionable whether the big carriers can scale these kinds of products to meet the needs of smaller corporations and local governments. The big, intense security platforms are incredibly expensive and are out of the price reach of the average business.

However, there is a real need for guaranteed safe connections between office and home. Companies have to find a way to trust that data exchanged with employees working outside the office is as safe as data moved around inside the business. I’m guessing the explosion of people working at home is going to result in some spectacular data breaches that will scare all of the companies that have sent employees home to work.

In addition to security, those working at home need easy solutions for all of the other routine functions performed at the office, including spam filtering, secure data backup, and disaster recovery.

There are solutions available to solve at least some of these issues today, but again they are complicated for companies without a sizable IT staff. Some of the solutions include things like:

  • Cloud-based security software is a set of software and technologies that help companies meet regulatory compliance (like with the new California privacy laws) and that are designed to protect company and customer data in a wide variety of circumstances. This differs from traditional security software in that every transaction with the cloud can be assigned different levels of privacy and access to data. For example, this is the kind of software that allows customers to review their data and nobody else’s.
  • Microsegmentation is software that can create secure zones inside data centers and cloud deployments to enable companies to isolate different parts of their workload. For example, remote employees could be given access to more limited data than those working in the office, and everything they do remotely can be blocked from having any access to core servers. (A minimal sketch of this idea follows this list.)
  • Cloud SD-WAN is a technology that has been used for companies that operate multiple branches. Each remote employee can be treated as a separate branch of the business and be provided with an individual firewall and other standard security protocols.
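To make the microsegmentation bullet above more concrete, here is a minimal sketch of the kind of zone-based policy such software enforces. The zone names, resources, and helper function are all hypothetical and illustrate the concept rather than any vendor’s product.

```python
# Hypothetical zone-based access policy illustrating microsegmentation:
# remote workers see a narrower slice of the network than office workers,
# and nothing initiates connections from the core servers.
ALLOWED = {
    "office-workstation": {"file-share", "crm-app", "print-server"},
    "remote-vpn": {"crm-app"},        # remote employees get a limited view
    "core-servers": set(),            # core zone initiates nothing outward
}

def is_allowed(source_zone: str, resource: str) -> bool:
    """Permit traffic only when the source zone explicitly lists the resource."""
    return resource in ALLOWED.get(source_zone, set())

print(is_allowed("remote-vpn", "crm-app"))             # True  - allowed from home
print(is_allowed("remote-vpn", "file-share"))          # False - blocked for remote users
print(is_allowed("office-workstation", "file-share"))  # True
```

Real microsegmentation products enforce rules like these at the network or hypervisor layer rather than in application code, but the allow-list logic is the same.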

Smaller ISPs ought to find some way to explore these kinds of products to offer to customers with remote workers. This is likely to be beyond the capability of most ISPs and might best be tackled by trade associations or other groups where ISPs collaborate.

This is a product that could be sold in large quantities today if it were ready as an off-the-shelf application that could be offered to individual users. It’s unlikely the need for supporting working from home is going to go away, so ISPs ought to do what they’ve always done and find trustworthy solutions their customers need and want.

The Coming Year of Confusion

I’ve had a number of people ask me about how I think COVID-19 will impact the ISP industry over the next six months. It’s an interesting question to consider because there are both positive and negative trends that ISPs need to be concerned about. The chances are that these various trends will affect markets and ISPs differently – making it that much harder for an individual ISP to understand what they are going to see over the next six months. Following are some of the trends I think ISPs will need to deal with:

People Want Faster Broadband.  Many households came to the realization that their home broadband is inadequate when parents and students tried to work from home simultaneously. OpenVault reported that the number of households subscribing to gigabit service nearly doubled in the first quarter of this year. Clients are reporting an increased demand from first-time customers as well as customers wanting to upgrade to faster speeds.

Downturn in Small Businesses. Everything seems to indicate that a lot of small and medium businesses are not going to survive the pandemic. There have already been a number of businesses like restaurants and small retail stores that have gone under. The anchor stores at malls are failing right and left. There seems to be an expectation that travel-related businesses are going to take a long time to come back. Everything I read says that there is a coming crisis in the fall for business landlords when they finally digest that business tenants are either disappearing or will want to negotiate cheaper rent. That’s likely to have a secondary ripple effect as strip malls and other business landlords start declaring bankruptcy. Over time, new businesses will grow to fill many of the voids, but there has been a huge shift to shopping online that will likely not retreat to pre-COVID levels.

People Will Continue to Work from Home. Every day I read about businesses that say that working from home, at least part time, will become the new normal in many industries. The latest was a survey of law firms that said that a lot of lawyers are not going to return to the office full time when the pandemic is over. This is good news for ISPs that provide residential broadband, because people working from home are going to demand speeds and latency that will support their work. OpenVault just reported that, as of the end of the first quarter of 2020, the percentage of homes subscribing to gigabit broadband had doubled over the last year and is now at 3.75% of all homes and growing rapidly. This is not such great news for ISPs that serve law offices.

The Big Unknown is the Impact of Unemployment. As businesses fail or downsize, a lot of people are not going to be returning to their original jobs. ISPs are already reporting that people are ditching telecom products like landlines. The cord cutting in the last month of the first quarter of this year was record-setting. The big unknown will be the number of households that can no longer afford to buy landline broadband. Obviously, unemployment isn’t going to stay at the current 40 million people, but it’s not quickly going to return to pre-COVID levels. A secondary impact of a degraded economy will be a surge in bad debt as customers hang onto home broadband as long as they can. We’re likely to see a big impact when the Keep Americans Connected pledge ends. If ISPs present a bill for multiple back months of billing, we ought to see a lot of customers forced to default and cancel broadband.

The Pandemic is the Dagger That Will Finally Kill DSL. Homes that have an option of using DSL or something faster like cable broadband or fiber are going to be bailing on DSL in big numbers. Many people in towns have stuck with DSL because it is priced cheaper than cable broadband. However, for a lot of homes, the most important factor in broadband has become speed and performance.

The Rural Broadband Gap Will Keep Getting Headlines. COVID-19 made it clear to elected officials at all levels of government that the rural broadband gap is badly hurting the economy. Even if schools return to normal, businesses in rural areas are not going to have the same flexibility to send employees home, and unemployed people in rural areas are not going to be easily able to accept at-home jobs. That’s going to keep a sizable slice of the economy from fully participating in any recovery. Almost everybody I talk to is hopeful that this might translate to increased grant money for rural broadband – but that’s no guarantee.

We’re Going to Have Unexpected Shortages in the Supply Chain. The best way to describe the supply chain right now is spotty. Manufacturers of telecom electronics are going to suddenly find they can’t buy one or two components, and manufacturing will come to unexpected halts. Anybody building a broadband network needs to expect delays, and if history is a good teacher, the delays will last longer than expected. This is going to play havoc with anybody that has financed a new network and needs to install customers to meet debt payments.

Banks Are Going to Tighten Lending. It’s inevitable that as banks digest bad loans from failing businesses they are going to get more cautious about making new loans. Even if interest rates don’t rise, banks will do what they always do under stress and get more conservative. Some local banks are likely to get into real trouble and will fail if their portfolios were heavily invested in businesses that are failing.

This all makes for an interesting short-term future. There will be more people yelling for faster broadband at the same time there will be more customers unable to afford broadband. There will be grants awarded for rural markets at a time when banks might not provide the matching funds. All in all, it’s going to be a mess for most ISPs who will see both good and bad things affecting them at the same time.

Verizon’s Network Performance

Verizon has been posting a weekly report of how COVID-19 has been impacting their network. The weekly blogs are rather short on facts and it’s clear that the intent of this weekly report is to put investors at ease that the company’s networks are coping with the burst of traffic that has come as a result of the pandemic. With that said, the facts that are discussed are interesting.

Verizon led off the weekly entry for 5/21 by saying that voice and text traffic are starting to return to pre-COVID levels. On the most recent Monday Verizon saw 776 million voice calls, down from 860 million calls at the peak of COVID-19. That falls under the category of interesting fact, but heavier telephone call volumes are not the cause of undue stress on the Verizon network. Telephone calls use tiny amounts of broadband – 64 kbps. Thirty telephone calls will fit into the same-size data path as one Netflix stream. Additionally, once voice calls reach a Verizon hub, they are routed over the separate public switched telephone network (PSTN) to transport calls across the country. Text messages use much less data than a telephone call and are barely noticed on telco networks.
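The call-versus-stream comparison is easy to verify with back-of-the-envelope math. The 64 kbps per call figure is from the paragraph above; the roughly 2 Mbps standard-definition stream is my assumption, not a Verizon number.

```python
# Back-of-the-envelope check: how many 64 kbps voice calls fit in one
# standard-definition video stream (assumed at roughly 2 Mbps).
call_kbps = 64
sd_stream_kbps = 2_000   # assumed SD Netflix stream

calls_per_stream = sd_stream_kbps / call_kbps
print(f"One SD stream carries roughly {calls_per_stream:.0f} voice calls")
# in line with the 'thirty calls' comparison above
```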

The bigger news is that some other traffic is staying at elevated levels. Verizon reported for 5/21 that gaming is still up 82% over pre-pandemic levels and VPN connections used to connect to school and work servers are up 72%. The use of collaborative tools like Zoom and Go-to-Meeting is up ten times over pre-COVID levels (1,000%).

One of the more interesting statistics is that network mobility (people driving or walking and switching between cell towers) has increased in recent weeks and that one-third of states now have higher levels of mobility than pre-COVID. At first that’s a little hard to believe, until you realize that in pre-COVID times students and employees were largely stationary at school or the office for much of the day – any roaming by stay-at-home people is an increase.

Reading back through the weekly statistics shows that most web activities are at higher levels than pre-COVID. For example, in the 4/22 report the volumes of downloading, gaming, video usage, VPNs, and overall web traffic were higher than normal, with the only decrease being the volume used for social media.

What none of these reports talk about is the stress put on the Verizon networks. It’s easy in reading these reports to forget that Verizon wears many hats and operates many networks. They are still a regulated telco in the Northeast and still have a lot of telephone customers. That also means they still operate a sizable DSL network. The company, through Verizon FiOS, is still the largest fiber-to-the-home provider. The company also owns an extensive enterprise and long-haul fiber network. Verizon also operates one of the largest cellular networks in the world.

When Verizon says all is well, they can’t mean that for each of these networks. The web is full right now of complaints from DSL customers (Verizon’s and other big telcos) complaining how inadequate DSL is for working at home. The Verizon DSL network was already overstressed in evenings and has to be near the point of collapse due to the big increases in VPNs and collaboration connections. Any Verizon DSL customer reading this Verizon blog that says everything is fine is probably spitting fire.

By contrast, Verizon’s FiOS networks are likely handling the pandemic traffic with ease. Verizon FTTH products have offered symmetrical data for years, with the upload data path lightly utilized. The big uptick in VPN connections and collaboration connections ought to be handled well in that network. Any glitches might come from older FiOS neighborhoods where the backhaul paths out of neighborhoods are too small.

What Verizon and AT&T haven’t talked about is the different impact on their various networks. For example, what’s the overall change in data usage on their cellular networks compared to other networks? The big telcos have been mum on this kind of detail, because admitting that some of their networks are handling the pandemic well might lead to an admission that other parts of the company are not doing so well. Instead we get the very generic story of how everything is fine with the company and their networks.

These companies probably do not have any obligations to report about their various networks in detail. Verizon DSL customers don’t need company pronouncements to know that their broadband experience has nearly collapsed since the pandemic. FiOS customers are likely happy that their broadband has weathered the storm. One of these days I’ll hopefully have a beer with some Verizon engineer who can tell me what really happened – both good and bad – behind the scenes.

Easing Fiber Construction

Almost every community wants fiber broadband, but I’ve found that there are still a lot of communities that have ordinances or processes in place that add cost and time to somebody trying to build fiber. One of the tasks I always ask cities to undertake is to do an internal review of all of the processes that apply to somebody who wants to build fiber, to identify areas that an ISP will find troublesome. Such a review might look at the following:

Permitting. Most cities have permitting rules to stop companies from digging up the streets at random, and ISPs expect to have to file permits to dig under streets or to get onto city-owned utility poles. However, we’ve run into permitting issues that were a major hindrance to building fiber.

  • One of my clients wanted to hang fiber on city-owned poles and found out that the city required a separate permit for each pole. The paperwork involved with that would have been staggering.
  • We worked in another city where the City wanted a $5,000 non-refundable fee for each new entity wanting to do business in the city. Nobody at the City could recall why the fee was so high and speculated that it was to help deter somebody in the past that they didn’t want working in the city.
  • I’ve seen a number of cities that wanted a full set of engineering drawings for the work to be done and expected no deviance from the plans. Very few ISPs do that level of engineering up front and instead have engineers working in front of construction crews to make the final calls on facility placement as the project is constructed.

Rights-of-Way. Cities and counties own the public rights-of-way on the roads under their control. Most cities want fiber badly enough to provide rights-of-way to somebody that is going to build fiber. But we’ve seen cities that have imposed big fees on getting rights-of-way or who want sizable annual payments for the continued use of the rights-of-way.

I’ve seen fiber overbuilders bypass towns that overvalue the rights-of-way. Many cities are desperate for tax revenues and assume anybody building fiber can afford high up-front fees or an ongoing assessment. These cities fail to realize that most fiber business plans have slim margins and that high fees might be enough to convince an ISP to build somewhere else.

Work Rules. These are rules imposed by a city that require work to be done in a certain way. For example, we’ve seen fiber projects in small towns that required flagmen to always be present even though the residential streets didn’t see more than a few cars in an afternoon. That adds a lot of extra cost that most builders would view as unnecessary.

We’ve seen some squirrelly rules for work hours. Many cities don’t allow work on Saturdays, but most work crews prefer to work 6-day weeks. We’ve seen work hours condensed on school days that only allow construction during the hours that school is in session, such as 9:00 to 2:00. Anybody who has set up and torn down a boring rig knows that this kind of schedule will cut the daily feet of boring in half.
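A rough back-of-the-envelope example shows why a restricted window hurts so much. The setup and teardown time, the boring rate, and the length of a normal workday below are all assumed numbers for illustration, not field data.

```python
# Illustration of how a restricted work window cuts daily boring footage.
# All inputs are assumptions chosen only to illustrate the arithmetic.
setup_teardown_hours = 1.5   # assumed daily time to set up and tear down the rig
feet_per_hour = 100          # assumed boring rate while actually drilling

def daily_feet(window_hours: float) -> float:
    """Feet bored in a day given the allowed work window."""
    productive_hours = max(window_hours - setup_teardown_hours, 0)
    return productive_hours * feet_per_hour

normal_day = daily_feet(9)   # assumed normal 9-hour construction day
school_day = daily_feet(5)   # 9:00 to 2:00 window

print(f"Normal day: {normal_day:.0f} ft, restricted day: {school_day:.0f} ft")
# roughly half the footage under these assumptions
```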

Timeliness.  It’s not unusual for cities to be slow on tasks that involve City staff. For example, if a City does their own locates for buried utilities it’s vital that they perform locates on a timely basis so as to not idle work crews. In the most extreme case, I’ve seen locates put on hold while the person doing the locates went on a long vacation.

We’ve also seen cities that are slow on inspecting sites after construction. Fiber work crews move out of a neighborhood or out of the town when construction is complete, and cities need to inspect the roads and poles while the crews are still in the market.

In many cases, the work practices in place in a city are not the result of an ordinance but were created over time in reaction to some past behavior of other utilities. In other cases, some of the worst practices are captured in ordinances that likely came about when some utility really annoyed the elected officials in the past. A city shouldn’t roll over and relax all rules for a fiber builder because such changes will be noticed by the other utilities that are going to want the same treatment. But cities need to eliminate rules that add unnecessary cost to bringing fiber.

We always caution ISPs to not assume that construction rules in a given community will be what is normally expected. It’s always a good idea to have a discussion with a city about all of the various rules long before the fiber work crews show up.