You’ve Got Mail

I’ve always been intrigued by the history of technology, and I think a lot of that is due to having almost everything computer-related happen during my lifetime. I missed a tech anniversary earlier this year when email turned 50.

It was April 1971 when software engineer Ray Tomlinson first used the @ symbol as the key to route a message between computers within ARPANET, the birthplace of the Internet. Tomlinson was working on a project at the U.S. Advanced Research Projects Agency that was developing a way to facilitate communications between government computers. There had been transmission of messages between computers starting in 1969, but Tomlinson’s use of the @ symbol has been identified as the birth of network email.

ARPA was renamed DARPA in 1972 to reflect its defense mission. DARPA kept a key role in the further development of email and created a set of email standards in 1973. These standards include things like having the “To” and “From” fields as headers for emails.

Email largely remained a government and university tool until 1989, when CompuServe made email available to its subscribers. CompuServe customers could communicate with each other, but not with the outside world.

In 1993, AOL further promoted email when every AOL customer was automatically given an email address. This led to the “You’ve got mail” slogan, and I can still hear the AOL announcement in my head today.

In 1996, Hotmail made a free email address available to anybody who had an Internet connection. Millions of people got email addresses, and the use of email went mainstream. If you ever used Hotmail, you’ll remember the note at the bottom of every email that said, “P.S. I love you. Get your free email here”. Hotmail was purchased by Microsoft in 1997 and was morphed over time into Outlook. This was one of the first big tech company acquisitions, at $400 million, which showed that huge value could be created by giving away web services for free.

In 1997, Yahoo launched a competing free email service that gave users even more options.

In 2004, Google launched its free Gmail service, announcing that users would get a full gigabyte of storage, far more than anybody else offered.

Over the years, there have been many communications platforms launched that promised to displace email. This includes Facebook Messenger, WeChat, Slack, Discord, and many others. But with all of these alternate ways for people to communicate, email still reigns supreme and usage has grown every year since inception.

There are over 300 billion emails generated every day. Gmail alone has 1.6 billion email addresses, representing 20% of all people on the planet. In the workplace, the average American employee sends 40 emails each day and receives 121.

The beauty of email is its simplicity. It can work across any technology platform. Under the hood, messages are still plain text with simple standardized headers, and attachments are added using the MIME standard. Routing is done with SMTP (Simple Mail Transfer Protocol), which allows messages to be sent to anybody else in the world.
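For anybody curious how simple the protocol still looks to a programmer, here is a minimal sketch of sending a message with Python's standard smtplib and email libraries. The server name, port, credentials, and addresses are placeholders for illustration, not real accounts.

```python
# A minimal sketch of sending email over SMTP using Python's standard library.
# The server, port, login, and addresses below are placeholders, not real accounts.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"      # the part after @ routes the message, as Tomlinson intended
msg["To"] = "recipient@example.org"
msg["Subject"] = "Happy 50th birthday, email"
msg.set_content("Still delivered by SMTP after all these years.")

# Connect to an SMTP server, upgrade to TLS, authenticate, and hand off the message.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("sender@example.com", "app-password")
    server.send_message(msg)
```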

On the downside, the ease of email spawned spam once marketers found that they could sell even the most bizarre products if they sent enough emails. More recently, emails have been used to implant malware on a recipient’s computer when they open a malicious attachment.

One downside for the future of email is that many Americans under 30 hate using it. We’ll have to see over time if email gets displaced, but it would be a slow transition.

But email is still a powerful tool that is ingrained in our daily lives. Email was one of the early features that lured millions into joining the web. So happy birthday, email.

Using Private Rights-of-Way for Fiber

As if the broadband industry didn’t already have enough obstacles, a new issue has arisen in Virginia. A couple in Culpeper County, John and Cynthia Grano, have sued the Rappahannock Electric Cooperative to stop it from putting fiber on existing pole lines that are located on a private easement.

To put this lawsuit into perspective, Virginia law in the past would have required a utility to negotiate a private easement before placing utility networks on private land. But in 2020, the legislature passed a new law that allows electric and communications utilities to add fiber along existing aerial and buried rights-of-way without getting additional permission from property owners. This law was passed to make it easier to build fiber in rural Virginia.

The Cooperative was getting ready to embark on a $600 million rural fiber project to bring broadband to rural customers, but this lawsuit has caused the Cooperative to halt plans for now.

As is usual with lawsuits, there are always additional facts to consider. The rights-of-way in question are not along a road in a public right-of-way. Instead, the fiber route cuts across the landowner’s property, which also is the site for one of the Cooperative’s electric substations. Prior to the law being passed, the Cooperative had offered a $5,000 fee to use the rights-of-way on the property.

It might seem logical that the new law would have preempted this kind of lawsuit – because this situation is exactly what legislators had in mind when they passed the law. But I’ve learned in this industry that a new law is only truly secure after the law has been successfully tested in court.

This case has already made it through the first round of the courts, where a U.S. District Court sided with the property owners. The ruling said that the new law stripped the property owners of existing rights that had been established in the 1989 easement agreement with the Cooperative. The court said that landowners had lost property value even without the Cooperative trying to hang new fiber on existing easements.

This lawsuit has to bring a chill to any fiber builder in the country that relies on private rights-of-way and easements to build their project. The right to use public rights-of-way has been long established and cemented by challenges to laws early in the last century. This new Virginia law tried to grant the same status to private easements that have always been given to public rights-of-way – and that is a new area of law.

I have to assume that for this issue to stop the fiber expansion, the Cooperative must have a lot of electric lines that use private rights-of-way. Electric grids routinely cross private land – the large tower transmission grids mostly use private rights-of-way, and utilities rarely build high-voltage routes along public roads. If the issue was only with this one farm, the Cooperative could probably bypass it, but I’m sure the issue applies to many other properties as well.

The lawsuit should raise a red flag for any ISP that has rights-of-way on private land. There are a lot more private easements in place than you might suppose. Many subdivisions own their own roads. Private roads are routine in rural areas. ISPs routinely rent land for huts and cabinets.

None of this will be any comfort to the many households that were slated to get fiber broadband. Electric cooperatives like Rappahannock are leading the way in much of rural America for bringing fiber to areas with little or no current broadband. Virginia has a state goal to solve the rural broadband gap by the end of 2024, and this lawsuit will put a damper on those plans. As a little side note that will drive broadband advocates crazy, the property owner in this case has subscribed to Starlink and is not impacted by having to wait for better broadband.

Preparing for Storm Damage

After every major hurricane, like the Category 4 Hurricane Ida that recently hit Louisiana, there is talk in the telecom and power industries about ways to better protect our essential power and communication grids. Hurricane winds and storm surges did major damage to grids and networks in Louisiana, and the remnants of the storm caused massive flooding in the mid-Atlantic from western Maryland to New York City.

One thing that we’ve learned over time is that there is no way to stop storm damage. Strong hurricanes, tornados, floods, and ice storms are going to create damage regardless of the steps we might take to try to keep aerial utilities safe. What matters most is the amount of time it takes to make repairs – obviously, the shorter, the better.

A natural impulse is to bury all of the aerial utilities. However, the cost to bury wires in many places is exorbitant. Buried facilities also have their own problems during some weather events. It takes a long time and a lot of effort to find and fix problems in flooded areas, and buried electric lines are sensitive to the saltwater corrosion that comes with storm surges from big coastal storms.

The Electric Power Research Institute (EPRI) has been working on ideas to better protect wires, poles, and towers during big storms. EPRI operates an outdoor laboratory in Lenox, Massachusetts, to simulate storm damage. EPRI’s goal is to find techniques that either minimize storm damage or shorten the time needed to make repairs.

EPRI’s research is intriguing to anybody who’s been in the industry. For example, researchers are exploring ways that towers and poles can be made to collapse in pre-planned ways rather than be destroyed by winds. A controlled collapse could avoid all of the problems of snapped and dangerous power lines. If done properly, EPRI hopes a downed pole could be stood back up in hours instead of days.

They are exploring a similar line of research, looking at ways for aerial wires to disconnect from poles before drastic damage occurs. This would stop the domino effect of multiple poles being broken and dragged down by a heavy tree landing on a pole span. It would be a lot easier to put fallen wires back onto poles than to untangle and splice wire breaks caused by catastrophic damage.

EPRI is also exploring other aspects of effectuating storm damage repair. For example, there is a white paper on its site that looks at the effectiveness of alternate telecommunications channels that let key players at a utility communicate and coordinate after a storm. All of the normal modes of communication are likely to go dead when the wires and towers come tumbling down. The white paper looked at using GEO satellites, LEO satellites, and HF radios to communicate. The goal was to find a communications medium that would allow for a 3-way call after more conventional communication paths are out of service. The best solution was the high-orbit GEO satellites.

This kind of research is both important and interesting because coordination of repair efforts is one of the biggest challenges after every disaster. A utility can have standby crews ready to work, but nothing gets done until somebody can tell them what most needs to be addressed.

The Beginnings of 8K Video

In 2014 I wrote a blog asking if 4K video was going to become mainstream. At that time, 4K TVs were just hitting the market and cost $3,000 and higher. There was virtually no 4K video content on the web other than a few experimental videos on YouTube. But in seven short years, 4K has become a standard technology. Netflix and Amazon Prime have been shooting all original content in 4K for several years, and the rest of the industry has followed. Anybody who purchased a TV since 2016 almost surely has 4K capabilities, and a quick scan of shopping sites shows 4K TVs as cheap as $300 today.

It’s now time to ask the same question about 8K video. TCL is now selling a basic 8K TV at Best Buy for $2,100. But as with any cutting-edge technology, there is also a top-of-the-line option: LG is offering an 8K TV on Amazon for $30,000. There are a handful of video cameras capable of capturing 8K video. Earlier this year, YouTube provided the ability to upload 8K videos, and a few are now available.

So what is 8K? The 8K designation refers to the number of pixels on a screen. High-Definition TV, or 2K, allowed for 1920 x 1080 pixels. 4K grew this to 3840 x 2160 pixels, and the 8K standard increases pixels to 7680 x 4320. An 8K video stream packs sixteen pixels into the space where a high-definition TV had a single pixel, and four into the space where a 4K screen had one.

8K video will bring not only higher clarity but also a much wider range of colors. Video today is captured and transmitted using a narrow range of red, green, blue, and sometimes white subpixels that vary inside the limits of the Rec. 709 color specification. The colors our eyes perceive on the screen are basically combinations of these few colors, along with current standards that can vary the brightness of each pixel. 8K video will widen both the color palette and the brightness scale to provide a wider range of color nuance.

The reason I’m writing about 8K video is that any transmission of 8K video over the web will be a challenge for almost all current networks. Full HD video requires a video stream between 3 Mbps and 5 Mbps, with the highest bandwidth needs coming from a high-action video where the pixels on the stream are all changing constantly. 4K video requires a video stream between 15 Mbps and 25 Mbps. Theoretically, 8K video will require streams between 200 Mbps and 300 Mbps.
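To put those numbers in context, here is a back-of-the-envelope sketch of the raw, uncompressed bitrates behind each format. The frame rate, bit depth, and chroma subsampling are my own illustrative assumptions; the compressed stream ranges in the final comment are the ones cited above.

```python
# Back-of-the-envelope math comparing raw video bitrates for HD, 4K, and 8K.
# Assumptions (mine, for illustration): 30 frames per second, 10 bits per
# color sample, and 4:2:0 chroma subsampling (~15 bits per pixel on average).
FPS = 30
BITS_PER_PIXEL = 15

formats = {
    "HD (1080p)": (1920, 1080),
    "4K":         (3840, 2160),
    "8K":         (7680, 4320),
}

for name, (width, height) in formats.items():
    pixels = width * height
    raw_gbps = pixels * BITS_PER_PIXEL * FPS / 1e9
    print(f"{name}: {pixels / 1e6:.1f} M pixels/frame, raw ~{raw_gbps:.1f} Gbps")

# Typical compressed streams (the ranges cited in the text):
#   HD ~3-5 Mbps, 4K ~15-25 Mbps, 8K ~200-300 Mbps (projected)
# which shows how much heavy lifting compression does before video hits the network.
```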

We know that video content providers on the web will find ways to reduce the size of the data stream, meaning they likely won’t transmit pure 8K video. This is done today for all videos, and there are industry tricks used, such as not transmitting background pixels in a scene where the background doesn’t change. But raw 4K or 8K video that is not filtered to be smaller will need the kind of bandwidth listed above.

No ISP, not even a fiber provider, is ready for large-scale adoption of 8K video on the web. It wouldn’t take many simultaneous 8K subscribers in a neighborhood to exhaust the capacity of a 2.4 Gbps node in a GPON network. We’ve already seen faster video be the death knell of other technologies – customers were largely satisfied with DSL until they wanted to use it to view HD video, at which point neighborhood DSL nodes got overwhelmed.

There were a lot of people in 2014 who said that 4K video was a fad that would never catch on. With 4K TVs at the time priced over $3,000 and a web that was not ready for 4K video streams, this seemed like a reasonable guess. But as 4K TV sets got cheaper and as Netflix and Amazon publicized 4K video capabilities, the 4K format became commonplace. It took about five years for 4K to go from YouTube rarity to mainstream. I’m not predicting that 8K will do the same thing – but it’s possible.

For years, I’ve been advising clients to build networks that are ready for the future. We’re facing a possible explosion of broadband demand over the next decade from applications like 8K video and telepresence – both requiring big bandwidth. If you build a network today that doesn’t contemplate these future needs, you are looking at being obsolete within a decade – likely before you’ve even paid off the debt on the network.

Will Hyper-inflation Kill Broadband Projects?

The broadband industry is facing a crisis. We are poised to build more fiber broadband in the next few years than has been built over the last four decades. Unfortunately, this peak in demand hits a market that was already superheated, and at a time when pandemic-related supply chain issues are driving up the cost of broadband network components.

The numbers I am hearing from clients are truly disturbing. Just in the last few weeks, I’ve heard that the cost of conduit and resin-based components like handholes and pedestals is up 40%. I’ve heard mixed messages on fiber – some say prices are up as much as 20%, while others are seeing little in the way of price increases. I think the increases have to do with the specific kind of fiber being purchased as well as the history of the buyer – the biggest increases are hitting new or casual fiber buyers. I’ve also heard that the cost of fiber-related components like pigtails is way up.

The kinds of numbers I’m hearing can only be classified as hyper-inflation. It’s way outside the bounds of normalcy when the cost of something is up 20% to 40% in a year. I’ve been listening to a lot of economists lately who say that many pandemic-driven price increases are temporary and that what we are seeing is a price bubble – they predict prices ought to revert to old levels over time in competitive markets where multiple providers of components are bidding for future business.

But I keep looking at the country’s collective plans to build fiber, and it looks like we might be seeing a superheated industry for the rest of this decade. Even when much of the rest of the economy gets back to normal, it’s not hard to envision that fiber won’t.

This leads me to ask: is this hyper-inflation going to kill fiber projects? I start with the RDOF winners – the amount of subsidy they get is fixed and won’t be adjusted for higher costs. At what point do some of the RDOF projects stop making sense? The business modelers at these companies must be working overtime to see if the projects still work with the higher costs. It won’t be shocking to see some of the RDOF winners give up and return Census blocks to the FCC.

But this same thing affects the winners of every other grant. Consider the recent grant filings with the NTIA. Those were some of the most generous grants ever awarded, with the NTIA program picking up as much as 75% of the cost of a project. What happens to the winners of those grants if materials are much more costly when they go to build the project? Any extra cost must be borne by the grant winner, meaning that the real matching could be a lot more than 25%. Some grant winners are going to have a hard time finding extra funding to cover the extra costs. Some of these projects are in extremely rural places, and one has to worry that having to pay extra might make the difference between a project making sense and not. Even with grants, it’s often a fine line between a project being feasible or not.

This same worry has to be spreading through the big ISPs. Companies like Frontier, Windstream, and Lumen are betting their future viability on building more fiber. How do those plans change when fiber is a lot more expensive?

The worst thing is that we have no idea where prices will peak. We haven’t really seen any market impact yet from RDOF and other big grant programs. We’ve seen some impact from CARES Act spending, but that was a drop in the bucket compared to what we’re likely to see from ARPA and federal infrastructure spending.

I have a lot of nervous clients, and I have no good advice for them on this issue. Should they buy as much fiber and components as they can now before prices rise even more, or should they wait and hope for parts of the market to return to normalcy? What we’re experiencing is so far outside the bounds of normal that we have no basis for making decisions. I chatted with a few folks recently who speculated that the best investment they could make this year would be to buy $1 million of fiber reels and sit on them for a year – they might be right.

Video Meetings are the New Normal

One of the big changes that came out of the pandemic will have a permanent impact on broadband networks. Holding online meetings on Zoom, Microsoft Teams, GoToMeeting, and other video platforms has become a daily part of business for many companies.

This article in the New York Times cites predictions that businesses will cut travel by 20% to 50%. That would have a huge impact over time on the airline and hotel industries. As a lifelong road warrior, I recall the relief every year when the school year started back up in September and airports returned mostly to business travelers. It will be interesting to see whether airports really do get more deserted during the business-only travel months in the future.

But the real boon for businesses from less travel will be lower expenses and increased productivity. I can’t count the number of times that I traveled somewhere for a one or two-hour meeting – something that has now fallen off my radar. We’re going to replace rushing to make a flight with the use of broadband.

What is interesting is how hard we tried in the past to make video conferencing into an everyday thing. Everybody my age remembers these AT&T commercials from 1993, which predicted that video conferencing, working remotely, digital books, and GPS navigation would become a part of daily life. Most of the predictions made in those commercials became a reality much sooner than common video calling did. Whole new industries have been built around digital books, and GPS is seemingly built into everything.

The business world fought against video conferencing. I recall a client from 20 years ago who had invested in an expensive video conference setup and insisted on either meeting in person or holding a video conference. I recall the hassle of having to rent a local video conferencing center to talk to this client – but even then, I could see how that expense was far better than wasting a day in an airport and a night in a hotel.

I don’t know how typical my workday is, but I probably average 3 hours per day on video calls. I always hated long telephone calls, but I like the experience of seeing who I’m talking to. It has let me build real bonds with clients and colleagues, since I talk to them multiple times over video chat instead of at an occasional live meeting.

A few weeks ago, I wrote about the concept of broadband holding times to account for the fact that we are tying up broadband connections for hours with video chats or connecting to a work or school server. I’m not sure that we’ve fully grasped what this means for broadband networks. Most network engineers had metrics they used for estimating the amount of bandwidth required to serve a hundred or a thousand customers. That math goes out the door when a significant percentage of those customers are spending hours on video chats that use a small but continuous 2-way bandwidth connection.
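As a rough sketch of why the old math breaks down, compare a classic bursty-usage estimate with one where a slice of subscribers hold continuous two-way video sessions. Every figure here is an illustrative assumption of mine, not an engineering standard.

```python
# Illustrative sketch of how long-hold video sessions change capacity planning.
# All figures are assumptions for illustration, not measured engineering values.
SUBSCRIBERS = 1000

# Old-style estimate: bursty usage, where only a small share of subscribers
# are pulling significant traffic at the same instant.
peak_concurrency = 0.10        # assume 10% active at the busiest moment
avg_burst_mbps = 8             # assume ~8 Mbps per active user while bursting
old_estimate = SUBSCRIBERS * peak_concurrency * avg_burst_mbps

# Pandemic-style estimate: a chunk of subscribers also hold continuous video
# calls for hours, each needing a steady ~2 Mbps in both directions.
video_share = 0.30             # assume 30% on a call during the work/school day
call_mbps = 2
new_estimate = old_estimate + SUBSCRIBERS * video_share * call_mbps

print(f"Classic busy-hour estimate:      ~{old_estimate:.0f} Mbps")
print(f"With long-hold video sessions:   ~{new_estimate:.0f} Mbps")
```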

We’re not likely to fully grasp what this means for another year until the pandemic is fully behind us, and companies settle into a new normal. I know I’m not going to be in airports in the future like I was in the past, and many people I’ve talked to feel the same way.

Multi-gigabit Broadband

There have been a few ISPs in recent years that have quietly offered residential broadband with speeds of up to 10 gigabits. However, this year has seen an explosion of ISPs marketing multi-gigabit broadband.

I recall an announcement from Google Fiber last year offering an upgrade to 2-gigabit service in Nashville and Huntsville for $100 per month. Since then, the company has expanded the offer to other markets, including Atlanta, Charlotte, Kansas City, Raleigh-Durham, Austin, Salt Lake City, Provo, and Irvine.

Not to be outdone, Comcast Xfinity announced a 2-gigabit product, likely available in those markets where Google Fiber is competing. But Comcast doesn’t seem to really want to sell the product yet, having priced it at $299.95 per month. We saw the same high pricing when Comcast first introduced gigabit service – it gave them the bragging rights for having the fastest product, but the company was clearly not ready to widely sell it.

https://www.xfinity.com/gig

Midco, the cable company, markets speeds up to 5 gigabits in places where it has built fiber. In recent months, I’ve seen announcements from several rural cooperatives and telcos that are now offering 2-gigabit speeds.

This feels like a largely marketing-driven phenomenon, with ISPs trying to distinguish themselves in the market. It was inevitable that we’d see faster speeds after the runaway popularity of 1-gigabit broadband. OpenVault reported that as of June of this year, 10.5% of all broadband subscribers were buying a gigabit product. It makes sense that some of these millions of customers could be lured to spend more for even faster speed.

There are still a lot of broadband critics who believe that nobody needs gigabit broadband. But you can’t scoff at a product that millions are willing to buy. Industry pundits thought Google Fiber was crazy a decade ago when it announced that its basic broadband speed was going to be 1-gigabit. At that time, most of the big cable companies had basic broadband products at 60 Mbps, with the ability to buy speeds as fast as 200 Mbps.

It was clear then, and is still true today, that a gigabit customer can rarely, if ever, download from the web at a gigabit speed – the web isn’t geared to support that much speed the whole way through the network. But customers with gigabit broadband will tell you there is a noticeable difference between gigabit broadband and more typical broadband at 100 Mbps. People can perceive the snappier response that comes with gigabit speed.

The most aggravating thing about the debate about multi-gigabit speeds is how far the regulators have fallen behind the real world. According to OpenVault, the percentage of homes that subscribe to broadband with speeds of 100 Mbps or faster has grown to 80% of all broadband subscribers. We know in some markets that delivered speeds are less than advertised speeds – but the huge subscription levels are proof that subscribers want fast broadband.

Satellite Companies Fighting over RDOF

There has been an interesting public fight going on at the FCC as Viasat has been telling the FCC that Elon Musk’s Starlink should not be eligible for funding from the Rural Digital Opportunity Fund (RDOF). At stake is the $886 million that Starlink won in December’s RDOF auction that is still under review at the FCC.

Viasat had originally filed comments at the FCC stating that the company did not believe that Starlink could fulfill the RDOF requirements in some of the grant award areas. Viasat’s original filings listed several reasons why Starlink couldn’t meet its obligations, but the primary one was that Starlink technology was incapable of serving everybody in some of the more densely populated RDOF award areas. Viasat calculated the number of potential customers inside 22-kilometer diameter circles – the area that it says can be covered by one satellite. According to Viasat’s math, the most customers that could reasonably be served is 1,371 customers – and the company identified 17 RDOF areas with a greater number of households, with the maximum one having 4,126 locations.

Others in the industry have made similar claims, saying that Starlink will be good for serving remote customers, but that the technology is not capable of being the only ISP in an area and serving most of the homes simultaneously.

Last month, Viasat made an additional claim that Starlink does not have sufficient backhaul bandwidth to serve a robust constellation. This stems from an ongoing tug-of-war at the FCC over 12 GHz spectrum. Starlink wants this spectrum to enable it to create more ground stations for transferring data to and from the satellite constellation. This is spectrum that Dish Network owns and wants to repurpose for 5G. Dish Network has offered a spectrum-sharing plan that would greatly reduce Starlink’s use of the spectrum. The FCC filings on the topic are interesting reading, as wireless engineers on both sides of the issue essentially argue that everything the other side says is wrong. I’m not sure how the FCC ever decides which side is right.

The latest Viasat criticism of Starlink is based upon public statements made by Elon Musk at the Barcelona MWC conference, where he commented on how hard it is to fund the satellite business. Musk said that the business is likely to need between $20 billion and $30 billion in additional investment to reach the goal of over 11,000 satellites. Musk said his first priority is just to make sure that Starlink doesn’t go bankrupt. Viasat says that this is evidence that Starlink is a ‘risky venture’, something the FCC originally said should not be eligible for the federal RDOF subsidy.

Starlink recently asked the FCC to ignore everything that Viasat has filed and said that the Viasat comments are anti-competitive and are a ‘sideshow’. This has to be a huge puzzler for the FCC. We already see Starlink bringing good broadband to remote places that don’t have any broadband today. But the question in front of the FCC is not if Starlink can be a good ISP, but whether the company deserves a 10-year federal subsidy to support the business. Obviously, if Starlink needs at least $20 billion more to be viable, then getting or not getting the $886 million spread over ten years is not going to make a difference in whether Starlink makes it as a company.

The FCC is in a bind because many of these same issues were raised before the RDOF auction in an attempt by others to keep Starlink out of the auction. It wasn’t hard to predict that Starlink would win the subsidy in some of the most remote places in the country since it was willing to bid lower than other ISPs. The FCC voted to allow Starlink into RDOF just before the auction, and is now seeing that original decision challenged.

It’s also an interesting dilemma because of the possibility of an infrastructure plan from Congress that would fund fiber in most of the places won by Starlink. Would the FCC have allowed Starlink into the RDOF had it known about the possibility of such federal grants? I would have to guess not. The FCC is now faced with depriving those areas of a shot at a permanent broadband solution if it continues with the plan to give the RDOF to Starlink. That would just be bad policy.

Demystifying Oversubscription

I think the concept that I have to explain the most as a consultant is oversubscription, which is the way that ISPs share bandwidth between customers in a network.

Most broadband technologies distribute bandwidth to customers in nodes. ISPs using passive optical networks, cable DOCSIS systems, fixed wireless technology, and DSL all distribute bandwidth to a neighborhood device of some sort that then distributes the bandwidth to all of the customers in that neighborhood node.

The easiest technology to demonstrate this with is a passive optical network (PON), since most ISPs serve nodes of only 32 households or fewer. PON technology delivers 2.4 gigabits of download bandwidth to the neighborhood node to share among those 32 households.

Let’s suppose that every customer has subscribed to a 100 Mbps broadband service. Collectively, for the 32 households, that totals 3.2 gigabits of demand – more than the 2.4 gigabits being supplied to the node. When people first hear about oversubscription, they think that ISPs are somehow cheating customers – how can an ISP sell more bandwidth than is available?

The answer is that the ISP knows it’s a statistical certainty that all 32 customers won’t use the full 100 Mbps download capacity at the same time. In fact, it’s rare for a household to ever use the full 100 Mbps capability – that’s not how the Internet works. Let’s say a given customer is downloading a huge file. Even if the server at the other end of that transaction sits on a fast connection, the data doesn’t come pouring in from the Internet at a steady speed. Packets have to find a path between the sender and the receiver, and they arrive unevenly, in fits and starts.

But that doesn’t fully explain why oversubscription works. It works because all of the customers in a node never use a lot of bandwidth at the same time. On a given evening, some of the people in the node aren’t at home. Some are browsing the web, which requires minimal download bandwidth. Many are streaming video, which requires a lot less than 100 Mbps. A few are using the bandwidth heavily, like a household with several gamers. But collectively, it’s nearly impossible for this particular node to use the full 2.4 gigabits of bandwidth.

Let’s instead suppose that everybody in this 32-home node has purchased a gigabit product, like the one delivered by Google Fiber. Now the collective possible bandwidth demand is 32 gigabits, far greater than the 2.4 gigabits being delivered to the neighborhood node. This starts to feel more like hocus pocus, because the ISP has sold 13 times the capacity that is available at the node. Has the ISP done something shady here?

The chances are extremely high that it has not. The reality is that the typical gigabit subscriber doesn’t use a lot more bandwidth than a typical 100 Mbps customer. And when a gigabit subscriber does download something, the download finishes faster, meaning the transaction has less chance of interfering with transactions from neighbors. Google Fiber knows it can safely oversubscribe at thirteen to one because it knows from experience that there is rarely enough usage in the node to exceed the 2.4-gigabit download feed.
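Here is a quick sketch of that arithmetic. The node capacity and household count come from the discussion above; the busy-hour usage figure per home is my own rough assumption for illustration.

```python
# A rough sketch of PON oversubscription math. The busy-hour usage estimate
# per household is my own assumption for illustration, not a measured figure.
NODE_CAPACITY_MBPS = 2400      # downstream capacity shared by a GPON node
HOMES_PER_NODE = 32

def oversubscription(tier_mbps):
    """Return total sold capacity and the oversubscription ratio for a speed tier."""
    sold = tier_mbps * HOMES_PER_NODE
    return sold, sold / NODE_CAPACITY_MBPS

for tier in (100, 1000):
    sold, ratio = oversubscription(tier)
    print(f"{tier} Mbps tier: {sold / 1000:.1f} Gbps sold, {ratio:.1f}:1 oversubscription")

# Why it still works: assume a busy evening where every home averages ~10 Mbps
# (a couple of video streams plus browsing and gaming).
busy_hour_demand = 10 * HOMES_PER_NODE   # 320 Mbps
print(f"Estimated busy-hour demand: {busy_hour_demand} Mbps "
      f"({busy_hour_demand / NODE_CAPACITY_MBPS:.0%} of the node)")
```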

But it can happen. If this node is full of gamers, and perhaps a few super-heavy users like doctors who view big medical files at home, the node could have problems at this level of oversubscription. ISPs have easy solutions for this rare event. The ISP can move some of the heavy users to a different node, or it can split the node into two, with 16 homes on each. This is why customers with a quality-conscious ISP rarely see any glitches in broadband speeds.

Unfortunately, this is not true with the other technologies. DSL nodes are overwhelmed almost by definition. Cable and fixed wireless networks have always been notorious for slowing down at peak usage times when all of the customers are using the network. Where a fiber ISP won’t put more than 32 customers on a node, it’s not unusual for a cable company to have a hundred customers on one.

Where the real oversubscription problems show up today is on the upload link, where routine household demand can overwhelm the size of the upload path. Many households using DSL, cable, or fixed wireless technology during the pandemic have stories of times when they got booted from Zoom calls or couldn’t connect to a school server. These problems are due to the ISP badly oversubscribing the upload link.
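As a rough illustration of how quickly a thin upload path gets exhausted, here is a sketch for a hypothetical legacy cable node. Both the shared upstream capacity and the per-call bandwidth are assumptions for illustration; real numbers vary widely by network and video platform.

```python
# Rough illustration of upload oversubscription on a hypothetical legacy cable node.
# Both figures are assumptions for illustration, not specifications.
UPSTREAM_CAPACITY_MBPS = 100    # assumed total upstream shared by the whole node
MBPS_PER_VIDEO_CALL = 1.5       # assumed steady upstream needed per video call

max_simultaneous_calls = int(UPSTREAM_CAPACITY_MBPS / MBPS_PER_VIDEO_CALL)
print(f"Roughly {max_simultaneous_calls} simultaneous video calls "
      "before the upstream saturates")

# With a hundred or more households sharing the node, a work-and-school day
# where most homes have somebody on a call blows past that limit quickly,
# which is when people get booted from Zoom or can't reach the school server.
```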

Farm Fresh Broadband

I was lucky enough to get an advance copy of Farm Fresh Broadband by University of Virginia professor Christopher Ali. It’s a great read for anybody interested in rural broadband. The book is published by MIT Press and is now available for pre-order on Amazon.

The first half of the book discusses the history and the policies that have shaped rural broadband, and my review of his book will focus on this early discussion, which is near and dear to my heart. Ali hit on the same topics that I have been writing about in this blog for years. Of particular interest was Ali’s section talking about the policy failures that have led to the poor state of rural broadband today. Ali correctly points out that “we have a series of policies and regulations aimed at serving the interests of monopoly capital rather than the public interest and the public good”. Ali highlights the following policy failures that have largely created the rural digital divide:

  • Definition-Based Policy. The FCC has been hiding behind its 25/3 Mbps definition of broadband since 2015. We still see this today when current federal grants all begin with this massively outdated definition of broadband when defining what is eligible for grant funding. We recently passed a milestone where over 10% of all households in the country subscribe to gigabit broadband, and yet we are still trying to define rural broadband using the 25/3 Mbps standard. Unfortunately, sticking with this policy crutch has led to disastrously poor allocation of subsidies.
  • Technology Neutrality. Ali points to regulators who refuse to acknowledge that there are technologies and carriers that are not worthy of federal subsidies. This policy is largely driven by lobbying from the big ISPs in the industry. It led to the completely wasted $10 billion CAF II subsidy that was given to shore up DSL at a time when it was already clear that DSL was a failure as a rural technology. The same lack of regulatory backbone has continued as we’ve seen federal subsidies given to Viasat in the CAF II reverse auction and Starlink in the RDOF. The money wasted on these technologies could instead have been invested in bringing permanent broadband solutions to rural areas. It looks like Congress is going to continue this bow to the big ISPs by allowing grants to be awarded for any technology that can claim to deliver 100/20 Mbps.
  • Mapping. Ali highlights the problems with FCC mapping that disguises the real nature of rural broadband. He points to the example of Louisa County, Virginia, where the FCC maps consider the county to have 100% broadband coverage at 25/3 Mbps. It turns out that 40% of this coverage comes from satellite broadband. Much of the rest comes from the telcos in the county overstating actual speeds. M-Lab speed tests show the average speeds in the county as 3.91 Mbps download and 1.69 Mbps upload – something the FCC would not have considered broadband even a decade ago. Unfortunately, Louisa County is not unique, and there are similar examples all over the country where poor mapping policies have deflected funding away from the places that need it the most.
  • Localism. There are hundreds of examples where small regional telephone companies and cooperatives have brought great broadband to pockets of rural America. We have made zero attempt to duplicate and spread these success stories. In the recent CAF II awards we saw just the opposite, with huge amounts of money given to companies that are not small and not local. We already know how to fix rural broadband – by duplicating the way we electrified America by loaning money to local cooperatives. But regulators would rather hand out huge grants to giant ISPs. When we look back in a few decades at the results of the current cycle of grant funding, does anybody really believe that a big ISP like Charter will bring the same quality of service to communities as rural cooperatives?

The second half of the book is the really interesting stuff, and all I’ll offer here are some teasers. Ali describes why farmers badly need broadband. He describes the giant bandwidth needed for precision agriculture, field mapping, crop and livestock monitoring, and overall management of farms to maximize yields. One of the statistics he cites is eye-opening: fully deployed smart agriculture could generate 15 gigabytes of data annually for every 1,000 acres of fields. With the current land under cultivation, that equates to more than 1,300 terabytes of data per year. We have a long way to go to bring farms the broadband they need to move into the future.

I do have one criticism of the book, and it’s purely personal. Ali has a huge number of footnotes from studies, articles, and reports – and it’s going to kill many of my evenings as I slog through the fascinating references that I’ve not read before.