Broadband Interference

Jon Brodkin of Ars Technica published an amusing story about how DSL service went out in a 400-resident village in Wales each morning at 7:00 am. It turned out that one of the residents was switching on an ancient television that interfered with the DSL signal to the extent that the network collapsed. The ISP finally figured this out by walking around the village in the morning with a spectrum analyzer until it found the source of the interference.

It’s easy to think that the story points out another weakness of old DSL technology, but interference can be a problem for a lot of other technologies.

This same problem is common on cable company hybrid fiber-coaxial networks. The easiest way to understand this is to think back to the old days when we all watched analog TV. Anybody who watched programming on channels 2 through 5 remembers times when the channels got fuzzy or even became unwatchable. It turns out that a lot of different devices interfere with the frequencies used for these channels, including microwave ovens, the motors in power tools and lawnmowers, and other appliances like blenders. It was a common household occurrence for one of these channels to go fuzzy when somebody in the house, or even in a neighboring home, used one of these devices.

This same interference carries forward into cable TV networks. Cable companies originally used the same frequencies for TV channels inside the coaxial wires that were used over the air, and the lowest channels sat in the frequencies between 5 MHz and 42 MHz. It turns out that long stretches of coaxial wire on poles act as a great antenna, so cable systems pick up the same kinds of interference that happen in homes. It was pretty routine for channels 2 and 3, in particular, to be fuzzy in an analog cable network.

You’d think that this interference might have gone away when cable companies converted TV signals to digital. The TV transmissions for channels 2 through 5 got crystal clear because cable companies relocated the digital versions of these channels to better frequencies. When broadband was added to cable systems, the cable companies continued to use the low frequencies, and CableLabs elected to use them for the upload portion of broadband. There is still plenty of interference in cable networks today – probably even more than years ago since coaxial networks have aged and have more points for interference to seep into the wires. Until the pandemic, we didn’t care much about upload bandwidth, but it turns out that one of the major reasons that cable companies struggle to deliver reliable upload speeds is that they are using the noisiest spectrum for the upload function.

The DSL in the village suffered from the same issue since the telephone copper wires also act as a big outdoor antenna. In this village, the frequency emanating from the old TV exactly matched the frequencies used for DSL.

Another common kind of interference is seen in fixed wireless networks where multiple ISPs use the same frequencies in a given rural footprint. I know of counties with as many as five or six different wireless ISPs, and most use the same frequencies since most WISPs rely on a handful of channels in the traditional WiFi bands at 2.4 GHz and 5 GHz. I’ve heard of situations where WiFi is so crowded that the performance of every WISP suffers.

WiFi also suffers from local interference in the home. The WiFi standard says that all devices have an equal chance of using the frequencies. This means that a home WiFi router will cycle through the signals from all of the devices trying to make a WiFi connection. When a WiFi router connects with an authorized device inside the home, it allows for a burst of data, but then the router disconnects that signal and tries the next one – cycling through all of the possible sources of WiFi.
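
Here is a toy model of what that equal sharing does to per-device speeds. The numbers are made up for illustration, and real WiFi contention is messier than a simple even split, so treat this as a sketch of the trend rather than a description of the protocol.

```python
# Toy model: if a router's usable airtime is shared roughly equally among
# contending devices, per-device throughput falls as the device count grows.
# Made-up numbers; real WiFi contention also loses capacity to collisions
# and retransmissions, so reality is worse than this simple even split.
usable_capacity_mbps = 300  # hypothetical usable throughput of one router/channel

for devices in [1, 2, 5, 10, 20]:
    per_device = usable_capacity_mbps / devices
    print(f"{devices:>2} active devices -> roughly {per_device:5.1f} Mbps each")
```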

This is the same issue seen by people using WiFi in a high-rise apartment building or a hotel where many users are trying to connect to WiFi at the same time. Luckily, this problem ought to improve. The FCC has authorized the use of 6 GHz spectrum for home broadband, which opens up numerous new channels. Interference will only occur between devices trying to share a channel, and there will be far fewer such cases than today.
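
A little back-of-the-envelope math shows why more channels means less interference. The sketch below assumes roughly 3 usable non-overlapping channels at 2.4 GHz versus something like 59 20-MHz channels once 6 GHz is added (the exact channel counts depend on regulations and channel widths), and estimates the chance that at least one of twenty neighboring networks lands on the same channel you use if everyone picks at random.

```python
# Back-of-the-envelope: chance that at least one of N neighboring WiFi networks
# picks the same channel you did, assuming everyone chooses uniformly at random.
# Channel counts are rough assumptions (3 non-overlapping channels at 2.4 GHz;
# about 59 20-MHz channels once the 6 GHz band is included).
def prob_shared_channel(neighbors: int, channels: int) -> float:
    return 1 - (1 - 1 / channels) ** neighbors

neighbors = 20  # e.g., networks visible from a home office or apartment
for label, channels in [("2.4 GHz only", 3), ("with 6 GHz added", 59)]:
    p = prob_shared_channel(neighbors, channels)
    print(f"{label:>17}: {p:.0%} chance somebody shares your channel")
```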

The one technology that has no such interference is fiber. Nothing interferes with the light signal between a fiber hub and a customer. However, once customers connect the broadband signal to their home WiFi network, the same interference issues arise. I looked recently and could see over twenty other home WiFi networks from my office – a setup ripe for interference. Before we make too much fun of the folks in the Welsh village, there is a good chance that you are subject to significant interference in your home broadband today.

Comcast Offers New Work-from-home Product

The pandemic has forced millions of people to work from home. This instantly caused heartburn for the IT departments of large corporations because remote workers create new security vulnerabilities and open companies to cyberattacks and hacking. Big companies have spent the last decade moving data behind firewalls and suddenly are being asked to let thousands of employees pierce the many layers of protection against outside threats.

Comcast announced a new product that will alleviate many of these corporate IT concerns. Comcast, along with Aruba, has created the Comcast Business Teleworker VPN product. The product creates a secure VPN at each employee’s home and hauls the VPN connections for all remote workers to a remote datacenter where corporate IT can deal with them in one place. This isolates the worker connections from the corporate firewalls, and employees instead work with copies of corporate software that sit in the datacenter.

There is a perceived long-term need for the product since as many as 70% of companies say that they are likely to continue with the work-from-home model after the end of the pandemic. Working from home is now going to be a routine component of corporate life.

At the home end, the Comcast product promises to not interfere with existing home broadband. The only way for Comcast to do this is to establish a second data stream from a house using a separate cable modem (or utilizing modems that can establish more than one simultaneous connection). This is an important aspect of the product because one of the biggest complaints about working from home is that many homes have problems accommodating more than one or two workers or students at the same time. This new product would be ill-received by workers if implementing it means less bandwidth for everybody else in the home.

By routing all remote employees to a common hub, Comcast will enable corporate IT staff to mimic the work computing environment for remote workers. Many companies are currently giving remote employees limited access to core software systems and data, but this arrangement effectively establishes the Comcast hub as a secure node on the office network.

This is something that any ISP with a fiber network should consider mimicking. An open-access network on fiber already does the same thing today – it creates a VPN for each customer of a given ISP and then aggregates the signals, untouched, for delivery to that ISP. On a fiber network, this function can be handled by fairly simple routing. Fiber ISPs can also keep the work-from-home path separate from the consumer path by either carving out a VPN or providing a second data path – something most fiber ONTs already allow.
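
Here is a minimal sketch of that routing idea, assuming a network where each subscriber service is mapped to its own path and the ISP simply steers a ‘teleworker’ path to a corporate aggregation point while consumer traffic follows the normal route. The subscriber IDs and endpoints are invented for illustration – this is a concept sketch, not any vendor’s provisioning system.

```python
# Concept sketch: a fiber or open-access ISP keeping a teleworker path separate
# from the household's consumer path by mapping per-subscriber services to
# different aggregation points. All IDs and endpoints below are hypothetical.
subscriber_paths = {
    # (subscriber_id, service): aggregation endpoint
    ("home-1017", "consumer"):   "isp-internet-gateway",
    ("home-1017", "teleworker"): "corp-hub-datacenter-A",  # secure VPN handoff
    ("home-2040", "consumer"):   "isp-internet-gateway",
}

def route(subscriber_id: str, service: str) -> str:
    """Return the aggregation endpoint for a subscriber's service, defaulting
    to the ordinary consumer path if no special handoff is provisioned."""
    return subscriber_paths.get((subscriber_id, service), "isp-internet-gateway")

print(route("home-1017", "teleworker"))  # -> corp-hub-datacenter-A
print(route("home-1017", "consumer"))    # -> isp-internet-gateway
```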

Comcast has taken the extra step of partnering with Aruba to enable a corporation to establish a virtual corporate data center at a remote site. But fiber ISPs don’t have to be that complicated, and rather than offering this only to large corporate clients, a fiber network could deliver a secure path between home and office for a business with only a few remote employees.

This could even be provided to sole proprietors and could safely link home and office on a VPN.  That allows for the marketing of a ‘safe office’ connection for businesses of any size and would provide the average small business a much more secure connection between home and office than they have today.

Every fiber provider that serves both residential communities and business districts ought to develop some version of this product by year-end. If working from home is a new reality, then fiber-based ISPs ought to be catering to that market using the inherent robustness and safety of a fiber network to create and route VPNs over the local fiber network.

You Can’t Force Innovation

The new video service Quibi failed after only seven months of operation and after having received $2 billion in backing from big industry players. The concept was to offer short 5- to 7-minute video serials that would keep viewers engaged in a story from day to day and week to week. The failure seems to be due to nobody being interested in the format. Younger viewers aren’t interested in scripted Hollywood content and instead watch content created by their peers. Older people have now been trained to binge-watch. It turns out there was no audience for the concept of short cliff-hanger videos.

The Quibi failure reminded me that you can’t force innovations onto the public. We live in a society where everything new is hyped beyond belief. New technologies and innovations are not just seen as good, but in the hype-world are seen as game changers that will transform society. A few innovations live up to the hype, such as the smartphone. But many other highly hyped innovations have been a bust.

Consider bitcoin. This was a new form of currency that was going to replace government-backed currency. But the public never bought into the concept for one big fundamental reason – there is nothing broken about our current form of money. We deposit our money in banks, and it sits there safely until we’re ready to use it. For all of the endless hype about how bitcoin would change the world, I never heard a good argument for why bitcoin is better than our current banking system – except maybe for criminals and dictators who want to hide wealth.

Another big bust was Google Glass. People were not ready to engage with somebody in public who could film them and replay a casual conversation later or post it on social media. People were even more creeped out by the stalker aspect of men using facial recognition to identify and stalk women. To give credit to Google, the folks there never envisioned this as a technology for everybody, but the Internet hype machine played up the idea beyond belief. The public reaction to the technology was a resounding no.

Google was involved in another project that hit a brick wall. Sidewalk Labs, a division of Alphabet, envisioned a new smart city being created on the lakefront in Toronto. To tech folks, this sounded great. The city would be completely green and self-contained. Robots would take care of everything, from emptying trashcans when they are full to setting up picnics in the park and cleaning up afterward. Traffic was all underground, and an army of robots and drones would deliver everything people wanted to their doorstep. But before this even got off the drawing board, the people of Toronto rejected the idea as too big-brotherish. The same computer systems that catered to resident demands would also watch people at all times and record and categorize everything they do. In the end, privacy won out over technology.

Some technologies are hyped but never materialize. Self-driving cars have been touted as a transformational technology for over a decade. But in the last few years, the engineers working on the technology acknowledge that a fully self-sufficient self-driving car is still many years away. But this doesn’t stop the hype and there are still articles about the promise of self-driving cars in the press every month.

Nothing has been hyped more in my lifetime than 5G. In the course of recently watching a single football game, I must have seen almost a dozen 5G commercials. Now that 5G phones are hitting the market, the new technology is likely to soon be perceived by the public as a bust. The technology is being painted as something amazing and new, but recent tests show that 5G is no faster than 4G in 21 of 23 cities. 5G will eventually be faster and better, but will today’s hype make it hard for the cell companies to explain when 5G is actually here?

I could continue to list examples. For example, if I had believed the hype, I’d now live in a fully-automated home where I could talk to my home and have it cater to my every whim. I’d have unlimited power from a cheap neighborhood fusion power plant that produces unlimited and clean power fueled by water. I’d be able to avoid a commute by using my flying car. There is much to like in the hype-world, but sadly it’s not coming any time soon.

Pricing Strategies

One of the things that new ISPs always struggle with is pricing, and I’m often asked advice on the right pricing strategy. It’s not an easy answer and in working across the country I see a huge range of different pricing strategies. It’s really interesting to see so many different ideas on how to sell residential broadband service, which is fundamentally the same product when it’s offered on a fiber network. The following are some of the most common pricing strategies:

High, Low, or Market Rates? The hardest decision is where to set rates in general. Some ISPs are convinced that they need low rates to beat the competition. Others set high rates since they only want to sell products with high margins. Most ISPs set rates close to the market rates of the competitors. I sat at a bar once with a few ISPs who argued this for hours – in the end, the beer won.

One Broadband Product. A few ISPs like Google Fiber, Ting, and a handful of smaller ISPs offer only a single broadband product – a symmetrical gigabit connection. Google Fiber tried a two-product tier structure but announced this year that it has returned to the flat-rate $70 gigabit. The downside to this approach is that it shuts out households that can’t afford the price. The upside is that every customer has a high margin.

Simple Tiers. The most common pricing structure I see offers several tiers of prices. An ISP might have three tiers at $55, $70, and $90, ranging from 100 Mbps to a gigabit. Generally, such prices have no gimmicks – no introductory pricing, term discounts, or bundling. There are still ISPs with half a dozen or even more tiers, which would confuse me as a customer. For example, I don’t know how a customer would be able to choose between buying 75 Mbps, 100 Mbps, and 125 Mbps.

ISPs with this philosophy differ most in the gap between pricing tiers. Products could be priced $10 apart or $30 apart, and that makes a significant statement to customers. Small steps between tiers invite customers to upgrade, while bigger steps between tiers make a statement about the value of the faster speeds.

Low Basic Price. I’ve seen a number of ISPs that have a low-price basic broadband product, but otherwise somewhat normal tiers of pricing. This is done more often by municipal ISPs trying to make broadband affordable to more homes, but there are commercial ISPs with the same philosophy. As an example, an ISP might have an introductory tier of 25 Mbps for $40. This pricing strategy has always bothered me. This can be a dangerous product to offer because the low price might attract a lot of customers who would otherwise pay more. I’ve always thought that it makes more sense to offer a low-income product only to homes that qualify in some manner but give them real broadband.

Introductory Marketing Rate. Some ISPs set a low introductory rate for first-time customers. These rates are generally good for one or two years and customers routinely sign contracts to get the low rates. The long-term downside of this pricing philosophy is that customers come to expect low rates. Customers that take the introductory rate will inevitably try to renegotiate for continued low rates at the end of the contract period.

An ISP with this pricing structure is conveying some poor messages. First, it is telling customers that its rates are negotiable. It is also conveying the message that there is a lot of profit in its normal rates and that it is willing to sell for less. Customers dislike the introductory rate process because they invariably get socked with an unexpected rate increase when rates jump back to list prices. The era of introductory discounts might be coming to an end. Verizon recently abandoned the special pricing strategy because it attracts low-margin customers that often leave at the end of the contract period.

Bundling. This is the strategy of giving a discount for buying multiple services, and it has been the bread and butter of the big cable companies. Bundling makes less sense in today’s market where there is little or no margin in cable TV. Most small ISPs don’t bundle and take the attitude that their list prices are a good deal – much the same as car dealers who no longer haggle over prices. In order to bundle, an ISP has to set rates high – and many ISPs prefer instead to set fair rates and not bother with the bundle.

The Working-from-home Migration

Upwork, a platform that supports freelancers, conducted a major survey of more than 20,000 adults to look at the new phenomenon of people moving due to the pandemic, with questions also aimed at understanding the motivation for moving. Since Upwork supports people who largely work out of their homes, the survey concentrated on that issue.

The survey verified what is already being widely covered by the press – people are moving in large numbers due to the pandemic. The survey found that the rate of migration is currently three to four times higher than the normal rate of recent years.

The key findings from the survey are as follows:

  • Between 6.9% and 11.5% of all households are considering moving due to the ability to work remotely. That equates to between 14 and 23 million people. It’s a pretty wide range of results, but likely a lot of people that want to move will end up not moving.
  • 53% of people are moving to find housing that is significantly less expensive than their current home.
  • 54% of people are moving beyond commuting distance and are moving more than a two-hour drive away from their current job.
  • People are moving from large and medium cities to places with lower housing density.

These findings are corroborated by a lot of other evidence. For example, data from Apartments.com show that rental occupancy and rates in cities are falling in the most expensive markets compared to the rest of the country. Realtors in smaller markets across the country are reporting a boom of new residents moving into communities.

Economic disruption often causes big changes in population migration and we saw spikes in people moving during the last two economic downturns. In those cases, there was a big shift in people moving from rural areas to cities and in people moving from the north to the south to follow job opportunities.

Interestingly, this new migration might reverse some of those past trends. Many rural communities have been losing population over the last few decades and the new migration patterns might reverse some of that long-term trend. People have been leaving rural parts of states to get jobs in urban centers and working from home is going to let many of these same people move back to be closer to families.

Of course, one of the issues that a lot of folks moving away from cities are going to face is that the broadband is often not as good where they want to move. The big cable companies have better networks in big cities than in smaller markets. You don’t have to move far outside of suburbs or rural county seats to find homes with little or no broadband. Even cellular coverage is a lot spottier outside of cities. I’ve seen local newspaper stories from all over the country of people who have bought rural homes only to find out that there was no broadband available.

But this isn’t true everywhere. There are some smaller towns with fiber to every home. There are rural areas with fiber to the farms. Rural communities that have fiber ought to be advertising it far and wide right now.

As a thought experiment, I looked at the states around me to see if I could identify areas that have fiber. The search was a lot harder than I thought it should be. States ought to have an easy-to-find map showing the availability of fiber because those communities are going to move to the top of the list for people who want a rural setting and who will be working from home.

I’ve worked from home for twenty years and I’m happy to see this opportunity open for millions of others. It gives you the freedom to live where you want and to choose where to live for reasons other than a job. It’s going to be an interesting decade ahead if people can move to where they want to live. I just have to warn local elected officials that new people moving to your community are going to be vocal about having great broadband.

Can the FCC Regulate Facebook?

At the urging of FCC Chairman Ajit Pai, FCC General Counsel Tom Johnson announced in a recent blog that he believes the FCC has the authority to redefine the immunity shield provided by Section 230 of the FCC’s rules, which comes from the Communications Decency Act of 1996.

Section 230 of the FCC rules is one of the clearest and simplest rules in the FCC code: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In non-legalese, this means that a web company is not liable for third-party content posted on its platform. It is this rule that enables public comments on the web. All social media consists of third-party content. Sites like Yelp and Amazon thrive because the public posts reviews of restaurants and products. Third-party comments appear in a lot of other places on the web, such as the comment section of your local newspaper, or even here on my blog.

Section 230 is essential if we are going to give the public a voice on the web. Without Section 230 protections, Facebook could be sued by somebody who doesn’t like specific content posted on the platform. That’s dangerous because there is somebody who hates every possible political position.  If Facebook can be sued for content posted by its billions of users, then the platform will have to quickly fold – there is no viable business model that can sustain the defense of huge volumes of lawsuits.

Section 230 was created when web platforms started to allow comments from the general public. The biggest early legal challenge to web content came in 1995 when the Wall Street firm Stratton Oakmont sued Prodigy over a posting on the platform by a user that accused the president of Stratton Oakmont of fraud. Stratton Oakmont won the case when the New York Supreme Court ruled that Prodigy was a publisher because the platform exercised some editorial control by moderating content and because Prodigy had a clearly stated set of rules about what was allowable content on the Prodigy platform. As might be imagined, this court case had a chilling impact on the burgeoning web industry, and fledgling web platforms worried about getting sued over content posted by the public. This prompted Representatives Ron Wyden and Chris Cox to sponsor the bill that became the current Section 230 protections.

Tom Johnson believes the FCC has the authority to interpret Section 230 due to Section 201(b) of the Communications Act of 1934, which confers on the FCC the power to issue rules necessary to carry out the provisions of the Act. He says that when Congress instructed that Section 230 rules be added to FCC code, that implicitly means the FCC has the authority to interpret the rules.

But then Mr. Johnson does an interesting tap dance. He distinguishes between interpreting the Section 230 rules and regulating companies that are protected by these rules. If the FCC ever acts to somehow modify Section 230, the legal arguments will concentrate on this nuance.

The FCC has basically been authorized by Congress to regulate common carriers of telecommunications services, along with a few other responsibilities specifically assigned to the agency.

There is no possible way that the FCC could ever claim that companies like Facebook or Google are common carriers. If they can’t make that argument, then the agency likely has no authority to impose any obligations on these companies, even should it have the authority to ‘interpret’ Section 230. Any such interpretation would be meaningless if the FCC has no authority to impose such interpretations on the companies that rely on Section 230 protections.

What is ironic about this effort by the FCC is that the current FCC spent a great deal of effort to declassify ISPs from being common carriers. The agency has gone as far as possible to wipe its hands of any responsibility for regulating broadband provided by companies like AT&T and Comcast. It will require an amazing set of verbal gymnastics to claim the ability to extend FCC authority to companies like Facebook and Twitter, which clearly have none of the characteristics of a common carrier, while at the same time claiming that ISPs are not common carriers.

The Aftermath of Natural Disasters

The never-ending hurricane season in Louisiana this year is a reminder that fiber network owners should have disaster recovery plans in place before they are hit with unexpected major network damages and outages.

The magnitude of the storm damage in Louisiana this year is hard for the mind to grasp. Entergy, the largest electric company in the area, reported that the latest hurricane, Laura, took out 219 electric transmission lines and 1,108 miles of wiring. The storm damaged 9,760 poles, 3,728 transformers, and 18,706 spans of wire. And Entergy is not the only electric company serving the storm-damaged area. To make matters worse, the utility companies in the area were still in the process of repairing damage from the two earlier hurricanes.

Hurricanes aren’t the only natural disaster that can damage networks. The recent fires in the northwest saw large numbers of utility poles burnt and miles of fiber melted. The town of Ruston, Louisiana saw hurricane damage this year after having massive damage last year from a major tornado.

How does the owner of a fiber network prepare for major damage? Nobody can be truly prepared for the kind of damage cited above by Entergy, but there are specific steps that should be taken long before damage hits.

One of the first steps is to have a disaster plan in place. This involves identifying ahead of time all of the first steps that should be taken when a disaster hits. This means knowing exactly who to call for help. It means having at least a minimal amount of key spare components on hand, and knowing where to find what’s needed in a hurry. It involves having plans for how to get a message out to affected customers during the emergency.

Probably the best step to take is to join a mutual aid group. This is a group of other similar network owners that agree to send repair teams after a disaster strikes. For the kind of damage caused by the hurricanes this year, hundreds of additional work crews are needed to tackle the repairs. Every utility industry has such groups. For example, the American Public Power Association has a Mutual Aid Network. This group mobilizes crews from member utilities and rushes them to the affected area, as needed. Any company joining these groups must realize that they will be asked to send crews when other group members are hit by disasters.

These mutual aid groups are a lifesaver. They not only gather the workforce required to fix disaster damage, but they also help to coordinate the logistics of housing and feeding crews and of locating the raw materials – fiber and poles – needed to repair the damage.

There is also a money side of disasters to deal with. Much of the funding to repair major storm damage comes from FEMA as funds are authorized when governors declare states of emergency. There is a huge pile of paperwork needed to claim disaster funding and there are specialized consulting firms that can help with the efforts.

There was a time when electric networks and fiber networks were separate entities, but today electric companies all utilize fiber networks as a key component for operating the electric grid. When repairing downed electric lines, it’s now mandatory to also reconnect the fiber networks that allow electric substations to function. This means that crews of fiber splicers are needed alongside electric utility technicians.

The massive damages seen this year ought to be a reminder for anybody that operates a large network to have a disaster recovery plan. I know fiber overbuilders who have never considered this, and perhaps this year will prompt them to get ready – because you never know where the next disaster will hit.

FCC Expands Rural Use of White Space Spectrum

At the October monthly meeting, the FCC modified its Part 15 rules to allow for better utilization of white space spectrum in rural America – a move that should provide a boon to fixed wireless technology. The term ‘white space’ refers to spectrum that has been assigned for over-the-air television broadcasting but that sits empty in a given market and is not being used by a television station. In any given market there are channels of television spectrum that are not being used, and the ruling describes new ways that wireless ISPs, school systems, and others can better use the idle spectrum.

The FCC action follows a long-standing petition from Microsoft asking for better use of unused white space spectrum. The FCC asked Microsoft and the National Association of Broadcasters to negotiate a reasonable plan for using idle spectrum, and the actions taken by the agency reflect the cooperation of the parties. The FCC further plans to issue a Notice for Proposed Rulemaking to investigate other questions related to white space spectrum.

First, the FCC is allowing increased height for white space transmitters. The transmitters were previously limited to no more than 250 meters above the average terrain in an area, and that limit has been boosted to 500 meters. In case somebody is envisioning 1,500-foot towers, wireless companies achieve this height by placing towers on hilltops. The extra height is important for two reasons. Fixed wireless technology requires line-of-sight between the tower and a customer location, and the higher the tower, the better the chance of being able to ‘see’ some portion of a customer premise. Using higher towers also means that the wireless signal can travel farther – white space spectrum is unique compared to many other spectrum bands in that it can deliver some broadband at significant distances from a tower.
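
To put some rough numbers on the height change, here is a quick radio-horizon estimate using the standard 4/3-earth approximation (distance in kilometers is roughly 4.12 times the square root of the height in meters). Real coverage also depends on terrain, power, and the height of the customer antenna, so these are ballpark figures.

```python
import math

# Radio horizon under the standard 4/3-earth refraction approximation:
# distance_km ~ 4.12 * sqrt(height_in_meters). Pure geometry; actual white
# space coverage also depends on terrain, power, and receiver height.
def radio_horizon_km(height_m: float) -> float:
    return 4.12 * math.sqrt(height_m)

for height in (250, 500):  # old and new height-above-average-terrain limits
    d_km = radio_horizon_km(height)
    print(f"{height} m above average terrain -> horizon ~{d_km:.0f} km ({d_km * 0.621:.0f} miles)")
```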

The FCC order also allows increased power, raising the maximum effective radiated power from 10 watts to 16 watts. Power levels are important because the strength of the signal matters at the customer location – higher power means a better chance of delivering full broadband speeds.
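
For those who think in decibels, the power increase is modest – a quick calculation shows the jump from 10 watts to 16 watts is about a 2 dB gain.

```python
import math

# Going from 10 W to 16 W of effective radiated power is roughly a 2 dB gain.
gain_db = 10 * math.log10(16 / 10)
print(f"Power increase: {gain_db:.1f} dB")  # ~2.0 dB
```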

The order also builds in some additional protection for existing television stations by increasing the separation between an ISP’s wireless signal and existing television station frequencies. Transmissions with white space spectrum tend to stray out of band, and allowing broadband signals too close to television signals would mean degraded performance for both the television station and the ISP. One of the questions to be asked in the NPRM is whether there is a way to utilize the bands closer to existing television signals.

The FCC’s order also authorized the use of narrowband devices that use white space. This opens up the door to using white space spectrum to communicate with Internet of Things devices. In rural areas, this might be a great way to communicate with agricultural sensors since the white space spectrum can travel to the horizon.

Finally, the order allows for higher power applications in isolated geographic areas that can be ‘geo-fenced’, meaning that the transmissions can be done in such a way as to keep the signals isolated to a defined area. The envisioned uses for this kind of application would be to provide broadband along school bus routes or to provide coverage of defined farm fields.

These changes were a long time in coming, with Microsoft asking for some of these changes since 2008. The issues have been bouncing around the FCC for years and it finally took the compromise between the parties to make this work. Maybe some of the other parties arguing over spectrum allocation could learn from this example that cooperation beats years of regulatory opposition.

The Upload Speed Lie

In the 2020 Broadband Deployment Report, the FCC made the following claim: “The vast majority of Americans – surpassing 85% – now have access to fixed terrestrial broadband service at 250/25 Mbps”. The FCC makes this claim based upon the data provided to it by the country’s ISPs on Form 477. We know the data reported by the ISPs is badly flawed in the over-reporting of download speeds, but we’ve paid little attention to the second number the FCC cites – the 25 Mbps upload speed that is supposedly available to 85% of homes. I think the FCC’s claim that 85% of homes have access to 25 Mbps upload speeds is massively overstated.

The vast majority of the customers covered by the FCC statement are served by cable companies using hybrid fiber-coaxial technology. I don’t believe that cable companies are widely delivering upload speeds greater than 25 Mbps. I think the FCC has the story partly right. Cable companies tell customers that the broadband products they buy have upload speeds of 25 Mbps, and the cable companies largely report these marketing speeds on Form 477.

But do cable companies really deliver 25 Mbps upload speeds? One of the services my consulting firm provides is helping communities conduct speed tests. We’ve done speed tests in cities recently where only a tiny fraction of customers measured upload speeds greater than 25 Mbps on a cable HFC network.

It’s fairly easy to understand the upload capacity of a cable system. The first factor is the capacity that comes from the way the technology is deployed. Most cable systems deploy upload broadband using the frequencies on the cable system between 5 MHz and 42 MHz. This is a relatively small amount of bandwidth that sits in the noisiest part of the cable TV spectrum. I remember back to the days of analog broadcast TV and analog cable systems when somebody running a blender or a microwave would disrupt the signals on channels 2 through 5 – the cable companies are now using these same frequencies for uploading broadband. The DOCSIS 3.0 specification assigned upload broadband to the worst part of the spectrum because, before the pandemic, almost nobody cared about upload speeds.
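
Some back-of-the-envelope arithmetic shows how little room there is in that band. The sketch below assumes a DOCSIS 3.0 low-split plant where the noisy bottom portion of the 5-42 MHz band is unusable and each 6.4 MHz upstream channel yields something like 27 Mbps of usable throughput – the real numbers vary widely from plant to plant, so treat this as a rough estimate rather than a spec.

```python
# Back-of-the-envelope upstream capacity for a DOCSIS 3.0 low-split node.
# Assumptions (vary widely by plant): the 5-42 MHz band is 37 MHz wide, the
# noisy bottom ~17 MHz is unusable, each upstream channel is 6.4 MHz wide,
# and each channel yields roughly 27 Mbps of usable throughput.
band_mhz = 42 - 5
unusable_mhz = 17
channel_width_mhz = 6.4
usable_mbps_per_channel = 27

channels = int((band_mhz - unusable_mhz) // channel_width_mhz)
node_upstream_mbps = channels * usable_mbps_per_channel
print(f"~{channels} upstream channels -> ~{node_upstream_mbps} Mbps shared by the whole node")
```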

The second factor affecting upload speeds is the nature of the upload requests from customers. Before the pandemic, the upload link was mostly used to send out attachments to emails or backup data on a computer into the cloud. These are largely temporary uses of the upload link and are also considered non-critical – it didn’t matter to most folks if a file was uploaded in ten seconds or five minutes. However, during the pandemic, all of the new uses for uploading require a steady and dedicated upload data stream. People now are using the upload link to connect to school servers, to connect to work servers, to take college classes online, and to sit on video call services like Zoom. These are critical applications – if the upload broadband is not steady and sufficient the user loses the connection. The new upload applications can’t tolerate best effort – a connection to a school server either works or it doesn’t.

The final big factor that affects the bandwidth on a cable network is demand. Before the pandemic, a user had a better chance than today of hitting 25 Mbps upload because they might have been one of only a few people trying to upload at any given time. But today a lot of homes are trying to make upload connections at the same time. This matters because a cable system shares bandwidth both within the home and across the neighborhood.

The upload link from a home can get overloaded if more than one person tries to use it at the same time. Homes with a poor upload connection will find that a second or a third user cannot establish a connection. The same thing happens at the neighborhood level – if too many homes in a given neighborhood are trying to connect to upload links, then the bandwidth for the whole neighborhood becomes overloaded and starts to fail. Remember how, a decade ago, it was common for video streams to freeze or pixelate in the evening when a lot of homes were using broadband? The cable companies have largely solved that download problem, but now we’re seeing neighborhoods overloading on the upload side. The result is people unable to establish a connection to a work server or getting booted off a Zoom call.
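
Carrying the rough numbers from the sketch above one step further: if a node has on the order of 80 Mbps of shared upstream capacity and a couple hundred homes hang off it, it doesn’t take many simultaneous video calls and VPN sessions to saturate the upload link. All of the figures below are assumptions for illustration.

```python
# Illustration of neighborhood sharing on the upload side (assumed numbers).
node_upstream_mbps = 81          # shared upstream capacity from the sketch above
homes_on_node = 200              # typical-ish node size; varies by operator
steady_upload_per_user_mbps = 4  # rough need for a video call plus a VPN session

max_simultaneous_users = node_upstream_mbps // steady_upload_per_user_mbps
share_of_homes = max_simultaneous_users / homes_on_node
print(f"Only ~{max_simultaneous_users} simultaneous steady uploaders "
      f"(~{share_of_homes:.0%} of homes) before the node's upload link saturates")
```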

The net result of the overloaded upload links is that the cable companies cannot deliver 25 Mbps to most homes during the times when people are busy on the upload links. The cable companies have ways to fix this – but most fixes mean expensive upgrades. I bet that the cable companies are hoping this problem will magically go away at the end of the pandemic. But I’m guessing that people are going to continue to use upload speeds at levels far higher than before the pandemic. Meanwhile, if the cable companies were being honest, they would not be reporting 25 Mbps upload speeds to the FCC. (Just typing that made me chuckle because it’s not going to happen.)

Network Outages Go Global

On August 30, CenturyLink experienced a major network outage that lasted over five hours and disrupted CenturyLink customers nationwide as well as many other networks. What was unique about the outage was the scope of the disruptions, as the outage affected video streaming services, game platforms, and even webcasts of European soccer.

This is an example of how telecom network outages have expanded in size and scope and can now be global in scale. This is a development that I find disturbing because it means that our telecom networks are growing more vulnerable over time.

The story of what happened that day is fascinating and I’m including two links for those who want to peek into how the outages were viewed by outsiders who are engaged in monitoring Internet traffic flow. First is this report from a Cloudflare blog that was written on the day of the outage. Cloudflare is a company that specializes in protecting large businesses and networks from attacks and outages. The blog describes how Cloudflare dealt with the outage by rerouting traffic away from the CenturyLink network. This story alone is a great example of modern network protections that have been put into place to deal with major Internet traffic disruptions.

The second report comes from ThousandEyes, which is now owned by Cisco. The company is similar to Cloudflare and helps clients deal with security issues and network disruptions. The ThousandEyes report comes from the day after the outage and discusses the likely reasons for it. Again, this is an interesting story for those who don’t know much about the operations of the large fiber networks that constitute the Internet. ThousandEyes confirms the suspicion expressed the day before by Cloudflare that the issue was caused by a powerful network command issued by CenturyLink using Flowspec that resulted in a logic loop that turned off and restarted BGP (Border Gateway Protocol) over and over again.
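
For readers unfamiliar with the failure mode, here is a deliberately simplified toy model of the loop the analysts described: a filter rule that inadvertently matches the routing protocol’s own traffic gets distributed network-wide, each router that applies it drops its BGP sessions, the sessions re-establish, the rule is re-learned, and the cycle repeats. This is only an illustration of the dynamic, not CenturyLink’s actual configuration or commands.

```python
# Toy model of a self-reinforcing Flowspec/BGP failure loop (illustration only).
# A filter rule that inadvertently matches BGP's own traffic keeps tearing down
# the very sessions used to distribute (and eventually withdraw) the rule.

def session_cycle(rule_blocks_bgp: bool, cycles: int = 3) -> None:
    session_up = True
    for i in range(1, cycles + 1):
        if session_up and rule_blocks_bgp:
            print(f"cycle {i}: rule applied -> BGP traffic filtered -> session drops")
            session_up = False
        if not session_up:
            print(f"cycle {i}: session re-establishes -> rule is re-learned and re-applied")
            session_up = True

session_cycle(rule_blocks_bgp=True)
```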

It’s reassuring to know that there are companies like Cloudflare and ThousandEyes that can stop network outages from permeating into other networks. But what is also clear from the reporting of the event is that a single incident or bad command can take out huge portions of the Internet.

That is something worth examining from a policy perspective. It’s easy to understand how this happens at companies like CenturyLink. The company has acquired numerous networks over the years from the old Qwest network up to the Level 3 networks and has integrated them all into a giant platform. The idea that the company owns a large global network is touted to business customers as a huge positive – but is it?

Network owners like CenturyLink have consolidated and concentrated the control of the network to a few key network hubs controlled by a relatively small staff of network engineers. ThousandEyes says that the CenturyLink Network Operation Center in Denver is one of the best in existence, and I’m sure they are right. But that network center controls a huge piece of the country’s Internet backbone.

I can’t find anywhere that CenturyLink gave the exact reason why the company issued a faulty Flowspec command. It may have been used to try to tamp down a problem at a single customer, or it may have been part of more routine network upgrades implemented early on a Sunday morning when the Internet is at its quietest. From a policy perspective, it doesn’t matter – what matters is that a single faulty command could take down such a large part of the Internet.

This should cause concerns for several reasons. First, if one unintentional faulty command can cause this much damage, then the network is susceptible to this being done deliberately. I’m sure that the network engineers running the Internet will say that’s not likely to happen, but they also would have expected this particular outage to have been stopped much sooner and easier.

I think the biggest concern is that the big network owners have adopted the idea of centralization to such an extent that outages like this one are more and more likely. Centralization of big networks means that outages can now reach globally and not just locally like happened just a decade ago. Our desire to be as efficient as possible through centralization has increased the risk to the Internet, not decreased it.

A good analogy for understanding the risk in our Internet networks comes from looking at the nationwide electric grid. It used to be routine to purposefully allow neighboring grids to interact automatically, until it became obvious after some giant rolling blackouts that we needed firewalls between grids. The electric industry reworked the way that grids interact, and the big rolling regional outages disappeared. It’s time to have that same discussion about the Internet infrastructure. Right now, the security of the Internet is in the hands of a few corporations that stress the bottom line first and that have willingly accepted increased risk to our Internet backbones as a price to pay for cost efficiency.