Do We Still Need the Universal Service Fund?

A policy debate is currently circulating over who should pay to fund the FCC’s Universal Service Fund. For decades the USF has collected fees from telephone carriers providing landline and cellular service – and those fees have been passed on to consumers. As landline telephone usage has continued to fall, the fees charged to the remaining customers have increased. There have been calls for years to fix the USF funding mechanism by spreading the fees more widely.

Since the fund today is mostly being used to support broadband, the most logical way to expand funding is by collecting the fee from ISPs – which would likely also pass the fees on to consumers. A new idea has surfaced suggesting that the USF should instead be funded by the biggest users of the Internet – namely Netflix, Google, Facebook, and others. This argument was likely started by the big ISPs, which wanted to deflect the fee obligations elsewhere. The argument is that the big web companies get tremendous benefits from the Internet without paying towards the basic infrastructure.

As I’ve read this back-and-forth debate, I’ve been struck by a different thought. Instead of expanding funding for the USF, we ought to be talking about curtailing it. The Universal Service Fund is used for several purposes. USF funds the subsidies that provide cheaper broadband for schools and libraries. The fund also pays for better broadband for rural health care facilities. These seem like worthwhile programs that should continue to be funded.

But the USF has also been supporting the Lifeline program that gives a $9.25 monthly discount to qualifying low-income homes. The amount of that monthly subsidy hasn’t changed in years and has become less relevant over time. Some of the big ISPs have completely dropped out of the program, such as AT&T, which ditched participation in most of the states where it is still the incumbent telco. There were always rumors that the fund included a lot of fraud – but we never saw enough detail to know if this was true.

It seems like the current White House and Congress have a better alternative to Lifeline. Congress created the Emergency Broadband Benefit, which gives low-income homes a $50 monthly discount on broadband during the pandemic. Congress has proposed replacing that with a more permanent $30 discount. If Congress gets its act together and passes the infrastructure bill, then it’s time to have a serious talk about eliminating the FCC’s Lifeline program. There is no need to have both programs.

The final use of the Universal Service Fund is what I often refer to as the FCC’s slush fund. The FCC lets this fund accumulate and supposedly uses it to improve broadband in the country. But frankly, the FCC is terrible at this. Consider the history of this piece of the USF:

  • This money was originally intended to support rural telephone companies. State regulators capped telephone rates in most states in the range of $15 – $20 per month, which was not enough revenue to support the telephone networks in high-cost areas. Congress and the FCC decided many years ago that the U.S. economy was best served if everybody was connected to the telephone network, and this might have been the biggest boon to rural America after electrification. The policy was effective, and at one point we had a 99% telephone penetration rate in the country. This fund was needed, but it had big flaws. The FCC handed out the money based on formulas instead of looking at the needs of individual telcos. This resulted in some telcos and commercial telco owners getting incredibly rich from an over-generous subsidy. There was never any serious attempt at the FCC to get this right.
  • But as landline telephone service has been supplanted by cellular service and VoIP, the FCC transitioned this fund to subsidize rural broadband. Perhaps the best use of the funding was the ACAM program that gave money to rural telcos, many of which leveraged the money and took on big loans to build rural fiber. When people marvel at the amount of rural fiber in the Dakotas – it was funded by the ACAM program. But this plan also had faults, as some telcos used the ACAM money to make minor DSL upgrades and pocketed much of the subsidy.
  • After this, the FCC used the slush fund for a series of disastrous funding plans. The first was CAF II, where the FCC gave $11 billion to the largest telcos to upgrade rural DSL to 10/1 Mbps. This funding was given at a time when 10/1 Mbps was already too slow. A few telcos used the money properly but made little dent in improving broadband, since tweaking out-of-date rural DSL didn’t make it much better. I’m not alone in thinking that some of the big telcos pocketed much of this money. They made a few cosmetic upgrades but largely took the money straight to the bottom line. The FCC was apparently so aghast at the way this funding was wasted that it tacked on an extra $2 billion payment to the telcos after the end of the program.
  • Next, the FCC held a small reverse auction with some money left over from CAF II. Some of this money went to worthwhile fiber projects, but money also went to ISPs like Viasat – a mind-numbing use of federal subsidies.
  • Next came the RDOF reverse auctions. I think we’ll look back a decade from now and judge that this funding did far more harm than good. If you follow my blog, you know I believe that the FCC mucked up this program in half a dozen ways, each of which will have long-term consequences in the neighborhoods where the FCC got it wrong.
  • Finally, the FCC tried to launch a $6 billion 5G fund that would have handed subsidies to cellular carriers to extend cell coverage into areas where it’s needed. But there was so much deception in the reporting of rural cellular speeds that the FCC finally pulled the plug on this – although I think this idea is likely to roar back to life one of these days.

The bottom line is that the FCC is incredibly inept in administering the slush fund. I don’t know why anybody would think that a regulatory agency made up mostly of industry lawyers could be the best place to entrust billions of dollars of broadband funding. It’s hard to imagine that the FCC could have done any worse over the last decade with this slush fund. I’m pretty sure that any six readers of this blog could have chatted over beers and come up with better ways to use the money.

So rather than debating whether AT&T or Facebook should fund the Universal Service Fund – why don’t we debate largely eliminating the fund? I can’t think of any reason why we should continue to let the FCC gum up rural subsidy programs. Let’s find a way to fund the schools, libraries, and rural health care programs, and let’s get the FCC out of the business of goofing up subsidies.

An Update on Robocalling

The FCC has taken a number of actions against robocalling over the last year to try to tamp down on the practice, which every one of us hates. I’ve had the same cellular phone number for twenty-five years, and I attract far more junk calls every day than legitimate business calls.

The FCC has taken a number of specific actions, but so far this hasn’t made a big dent in the overall call volume. Actions taken so far include:

  • The FCC issued cease-and-desist letters to some of the biggest robocallers. For example, in May of this year, the agency ordered VaultTel to stop placing robocalls.
  • The FCC has been fining telemarketers with some of the biggest fines ever issued by the agency. This includes a $225 million fine against a Texas-based health insurance telemarketer for making over one billion spoofed calls. There have been other fines such as $120 million against a Florida time-share company and $82 million against a North Carolina health insurance company.
  • The FCC is hoping that its program for caller ID verification will tamp down significantly on the robocalls. This process, referred to as STIR/SHAKEN, requires that the originating carrier verify that a call is coming from an authorized customer – a simplified sketch of how a call gets signed follows this list. The new protocol has already been implemented by the big carriers like AT&T, but smaller carriers were given more time. The FCC noted recently that it has seen a big shift of robocalling originating from smaller carriers that are not yet part of STIR/SHAKEN.
  • The agency has begun to coordinate efforts with law enforcement to track down and arrest robocallers who continue to flout the rules. That includes working with the U.S. Justice Department and state attorneys general.
  • The FCC also gave telephone companies permission to ‘aggressively block’ suspected robocalls. The agency has also encouraged telephone companies to offer advanced blocking tools to customers.
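For the technically curious, here is a minimal sketch of the data that STIR/SHAKEN attaches to a call. Under the standards (the PASSporT token of RFC 8225, extended for SHAKEN), the originating carrier builds a small signed token asserting who the caller is and how confident the carrier is in that identity. The values below are hypothetical placeholders, and a real implementation would sign the token with the carrier’s certified ES256 key rather than leave it unsigned:

```python
# A sketch of the attestation data in a STIR/SHAKEN "PASSporT" token
# (RFC 8225 with the SHAKEN extension). All values are hypothetical
# placeholders; a real carrier signs this with its certified ES256 key,
# and only the unsigned structure is shown here.
import base64
import json
import time

def b64url(data: bytes) -> str:
    # JWT-style base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {
    "alg": "ES256",          # SHAKEN mandates ES256 signatures
    "typ": "passport",
    "ppt": "shaken",
    "x5u": "https://certs.example-carrier.com/shaken.pem",  # hypothetical cert URL
}
claims = {
    "attest": "A",                    # "A" = carrier vouches for the caller AND the number
    "orig": {"tn": "12025550100"},    # originating telephone number (placeholder)
    "dest": {"tn": ["12025550199"]},  # destination number (placeholder)
    "iat": int(time.time()),          # issued-at timestamp
    "origid": "7f3e9c1a-demo",        # opaque per-call identifier (placeholder)
}

unsigned = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
print(unsigned)  # the terminating carrier validates the signed version of this token
```

The terminating carrier checks the signature and the attestation level before deciding whether to complete, flag, or block the call.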

So far, the FCC actions haven’t made a big dent in robocalling. In 2020, we saw about 4 billion robocalls per month. The robocallers picked up the pace of calling in anticipation of getting shut down, and in March of this year, there were over 4.9 billion robocalls placed. In the most recently completed month of August, we still saw 4.1 billion robocalls. It appears that the robocallers have just shifted their methods and are able, at this point, to avoid the STIR/SHAKEN restrictions from the big carriers. Hopefully, a lot of this will get fixed when the protocol is mandatory for everybody. The FCC recently announced that it is accelerating the implementation date for a list of carriers that the agency says are originating a lot of the robocalls.

The FCC knew from the start that this wasn’t going to be easy. The process of generating robocalls is now highly mechanized, and a few companies can generate a huge volume of calls. Apparently, the profits are lucrative enough for robocallers to risk the big FCC fines. When I searched Google for the keywords robocaller and FCC, the first result was a company that is still selling robocalling services.

We saw the same thing a few years ago with access stimulation, where a few unscrupulous companies and carriers were making big dollars from generating huge volumes of bogus calls in order to bill access charges.

Hopefully, the FCC can eventually put a big dent in robocalling. These days, it’s hard to imagine anybody being willing to answer a phone call from somebody they don’t know. Hopefully, more giant fines and a few major convictions will convince the robocalling companies that it’s not worth it.

Improvements in Undersea Fiber

We often forget that a lot of what we do on the web relies on broadband traffic that passes through undersea cables. Any web traffic from overseas gets to the US through one of the many underwater fiber routes. As with all fiber technologies, engineers and vendors have been making regular improvements.

The technology involved in undersea cables is quite different from what is used for terrestrial fiber. A long fiber route includes repeater sites where the light signal is refreshed. Without repeaters, the average fiber light signal dies within about sixty miles. Our landline networks rely on powered repeater sites, and for major cross-country fiber routes, multiple carriers often share them.

But an undersea cable has to include the electric power and the repeaters within the cable system itself, since the cable may be laid as deep as 8,000 feet beneath the surface. HMN Tech recently announced a big improvement in undersea electronics technology. On a new undersea route between Hainan, China and Hong Kong, the company has been able to deploy 16 fibers with repeaters. This is a huge improvement over past technologies that limited the number of fibers to eight or twelve. With 16 lit fibers, HMN will be able to pass data on this new route at 300 terabits per second.
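To put those numbers into perspective, here is some back-of-the-envelope arithmetic based on the figures above – the route length used for the repeater count is a hypothetical example, not a spec from HMN Tech:

```python
# Rough arithmetic implied by the figures above. The route length is a
# hypothetical example; the repeater spacing and capacity come from the text.
route_miles = 6000            # hypothetical trans-oceanic route length
repeater_spacing_miles = 60   # the light signal dies within ~60 miles without a repeater
repeaters_needed = route_miles // repeater_spacing_miles
print(f"Powered repeaters needed: ~{repeaters_needed}")   # ~100 repeaters

total_capacity_tbps = 300     # HMN Tech's announced route capacity
lit_fibers = 16               # fibers deployed with repeaters on the new route
print(f"Capacity per fiber: {total_capacity_tbps / lit_fibers:.2f} Tbps")  # 18.75 Tbps
```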

Undersea fibers have a rough existence. There is a fiber cut somewhere in the world on an undersea fiber every three days. There is a fleet of ships that travels the world fixing undersea fiber cuts and bends. Most undersea fiber problems come from the fiber rubbing against rocks on the seabed. But fibers are sometimes cut by ship anchors, and even occasionally by sharks that seem to like to chew on the fiber – sounds just like squirrels.

Undersea fibers aren’t large. Near the shore, the cables are about the width of a soda can, with most of the bulk made up of tough shielding to protect against the dangers of shallow water. To the extent possible, an undersea fiber is buried near shore. Further out to sea, the cables are much smaller, about the size of a pencil – there is no need to protect fibers that sit deep on the ocean floor.

With the explosion in worldwide data usage, it’s vital that the cables can carry as much data as possible. The builders of the undersea routes only count on a given fiber lasting about ten years. The fiber will last longer, but the embedded electronics are usually too slow after a decade to justify continued use of the cable. Upgrading to faster technologies could mean a longer life for the undersea routes, which would be a huge economic benefit.

Technology Neutrality

Christopher Ali, a professor at the University of Virginia, says in his upcoming book Farm Fresh Broadband that technology neutrality is one of the biggest policy failures of our time. I completely agree, and today’s blog explores the concept and the consequences.

Over the last decade, every time a pot of grant money has appeared on the horizon, we’ve heard talk at the FCC about making sure there is technology neutrality when choosing the winners and losers of federal grants. This phrase had to have been invented by one of the big ISPs because, as is typical of DC politics, technology neutrality means exactly the opposite of what you might think it means.

Technology neutrality is a code word for allowing slower technologies to be funded from grants. The first time I remember hearing the phrase was in 2018, during the lead-up to the CAF II reverse auction. This was a $2 billion reverse auction for locations that hadn’t been claimed in the original FCC CAF II program. Many in the industry thought that federal grant funds ought to only be used to support forward-looking technologies. The term technology neutrality was used to support the argument that all ISPs and technologies should be eligible for grant funding. It was argued (mostly by ISPs that use slower technologies) that the FCC should not be in the game of picking winners and losers.

The technology neutrality proponents won the argument, and the FCC allowed technologies with capabilities as slow as 25/3 Mbps into the reverse auction. The results were what might be expected. Since lower-speed technologies tend to also be the least expensive to build, the slower technologies were able to win in a reverse auction format. It was not surprising at the end of that auction to see that three of the four top winners will collect $580 million to deploy slower technologies. This included fixed wireless providers AMG Technology (Nextlink) and WISPER, as well as high-orbit satellite provider Viasat.

The same argument arose again as the rules were being developed for the RDOF reverse auction. The first auction offered $14 billion in subsidies for ISPs to build last-mile broadband in places that the FCC thought had no broadband with speeds of at least 25/3 Mbps. The FCC heard testimony from the industry about the technologies that should be eligible for the subsidies. In the end, in the name of technology neutrality, the FCC allowed every technology into the reverse auction. The following is a quote from the FCC order that authorized the RDOF funding:

Although we have a preference for higher speeds, we recognize that some sparsely populated areas of the country are extremely costly to serve and providers offering only 25/3 Mbps may be the only viable alternative in the near term. Accordingly, we decline to raise the required speeds in the Minimum tier and we are not persuaded that bidders proposing 25/3 Mbps should be required to build out more quickly or have their support term reduced by half.

Again, it was not surprising to see that the list of RDOF winners included companies that will use the funding to build slower technologies, including fixed wireless and DSL. Only two of the top winners promised to build gigabit-capable broadband everywhere (a consortium of electric cooperatives and Charter). The FCC also decided at the last minute to allow Starlink into the auction – even though nobody knew at the time what speeds Starlink could deliver. The FCC really goofed up the technology issue by allowing some WISPs to bid and grab major winnings in the auction by promising to deliver gigabit speeds with fixed wireless technology – a technology that doesn’t exist for a rural setting.

We recently saw the technology neutrality issue rear its head again in a big way. As the Senate was crafting legislation for a major infrastructure program, the original draft language included a requirement that any technologies built with the money should be able to immediately deliver speeds of 100/100 Mbps. That requirement would have locked out fixed wireless and cable companies from the funding – and likely also satellite companies. In backroom wrangling (meaning pressure from the big ISPs), the final legislation lowered that threshold to 100/20 Mbps.

The reason that Ali says that this is a policy failure is that the broadband policymakers are refusing to acknowledge the well-known fact that the need for broadband speeds continues to increase year after year. We just went through a miserable pandemic year where millions of homes struggled with inadequate upload broadband speeds, and yet the technology neutrality canard was rolled out yet again to justify building technologies that will be inadequate almost as soon as they are built. I would argue that the FCC has an obligation to choose technology winners and losers and shouldn’t waste federal broadband money on technologies that have no long-term legs. The decision by regulators and legislators to allow grant funding for slower technology means that the speed that current ISPs can deliver is being given priority over the speed people need.

The Pandemic and the Internet

Pew Research Center conducted several polls asking people about the importance of the Internet during the pandemic. The Pew survey report is seven pages filled with interesting statistics and is a recommended read. This blog covers a few of the highlights.

The Overall Impact of the Internet. 58% of adults said that the Internet was essential during the pandemic – that’s up from 52% in April of 2020. Another 33% of adults say the Internet was important but not essential. Only 9% of adults said the Internet wasn’t important to them. The importance of the Internet varied by race, age, level of education, income, and location.

  • As might be expected, 71% of those under 30 found the Internet to be essential compared to 38% of those over 65.
  • 71% of those with a college degree found the internet to be essential versus 45% of those with a high school degree or less.
  • 66% of those in the upper third of incomes found the Internet to be essential compared to 55% of those in the lower third.
  • 61% of both urban and suburban residents found the Internet to be essential compared to 48% for rural residents.

Video Calling Usage Exploded. Possibly the biggest overall change in Internet usage has been the widespread adoption of video calling. 49% of adults made a video call at least once per week, with 12% doing so several times per day. The usage was most pronounced for those who work from home, with 79% making a video call at least once per week and 35% connecting multiple times per day.

Longing for a Return to Personal Interactions. Only 17% of Americans say that digital interactions have been as good as in-person contacts, while 68% say digital interactions are useful but no replacement for in-person contacts.

Challenges with Online Schooling. Only 18% of households said that online schooling went very well, with 45% saying it went somewhat well. 28% of households reported it was very easy to use the technology associated with online schooling, with another 42% saying it was somewhat easy. Twice as many people in the lower one-third of incomes as in the upper one-third said the online schooling technology was difficult. Nearly twice as many people in rural areas found the technology to be a challenge compared to suburban residents.

Problems with Internet Connections. 49% of all survey respondents said they had problems with their internet connection during the pandemic. 12% experienced problems often.

Upgrading Internet. 29% of survey respondents said they did something to improve their Internet connection during the pandemic.

Affordability. 26% of respondents said they are worried about the ability to pay home broadband bills. This was 46% among those in the lower one-third of incomes.

Tech Readiness. 30% of Americans say they are not confident using computers, smartphones, or other connected electronics. This was highest for those over 75 (68%), those with a high school degree or less (42%), and those in the lower one-third of incomes (38%).

Treasury Defines Capital Project Fund Grants

The U.S. Department of the Treasury finally released the rules for the $10 billion Capital Projects Fund that will be distributed to states for broadband. The full rules are here.

This blog is not going to spit back all of the rules. Those have already been outlined well by others. Here is a great summary from Kevin Taglang from the Benton Institute.

States must apply to Treasury for the funds. The amount that each state can receive is here. A lot of the recently released rules tell states how to go about the process of claiming the money. States must make an application by December 27 and have until a year later to file details of the specific grants made within the state.

It’s hard to think that states won’t pursue this money, although a few small states might have problems finding enough eligible projects. I’m going to concentrate below on a few of the Treasury rules that will carry into state grant rules.

States Will Administer Grants. States will make awards to specific projects. Each state will need a grant program that follows the federal rules for this money. Since these new rules differ from the rules governing many existing state grant programs, states will have to adjust quickly in order to follow the federal rules for at least this one grant. Some states will need legislative changes if their current grant rules were established by the legislature.

Communities and States Can Define Eligible Areas. These grants do not use FCC mapping in determining eligibility. A grant area must only be shown to not have reliable 100/20 Mbps broadband in order to be eligible – and this is a very loose test. Treasury provides wide leeway in defining eligible areas, and almost any reasonable form of proof of poor broadband can suffice to prove an area is eligible. Of course, states will have some say in defining eligible areas, and I foresee a huge tug-of-war over this issue between state grant offices, communities, ISPs, and legislators.

Symmetrical 100 Mbps Speeds. Grant-funded technologies must be able to provide symmetrical 100 Mbps speeds. This is going to cause confusion all over the industry, since different grant programs have different speed requirements. It might also require legislative changes in some states. There is a provision that says speeds can be slower where 100/100 Mbps isn’t practical, so expect a lot of challenges from ISPs trying to fund slower technologies.

Eligible Projects. A project must meet all of the following requirements: funds must be spent on capital assets that enable work, education, and health monitoring; projects must address a critical need that resulted from or was made obvious during the pandemic; and projects must address a critical community need.

Mostly for Infrastructure. Treasury wants priority given to last-mile infrastructure. States can request middle-mile projects, but Treasury must approve them. Some money will be allowed for devices, but the state must retain ownership of the devices. Money can go for improvements to government facilities that meet all of the eligibility rules.

No Required Matching. Treasury allows states to fund projects 100%, with no matching. But states might require matching in order to spread the grant benefits to more projects.

Some Prior Costs. Costs incurred back to March 1, 2021, can be included in a grant under some circumstances. This might cover costs like a feasibility or engineering study.

Labor Standards. The rules do not mandate paying Davis-Bacon wages, but they encourage projects to pay a living wage.

Projects Completed by End of 2026. Projects must be completed by then, although Treasury has the ability to grant extensions.

Summary. I expect most states will grab the available funding. This funding should result in a state-administered grant program in every state in 2022 since states have to demonstrate having awarded this money by the end of next year. Since states are likely to put their own twist on these rules, keep an eye out for specific state rules. And start getting projects ready!

You’ve Got Mail

I’ve always been intrigued by the history of technology, and I think a lot of that is due to having almost everything computer-related happen during my lifetime. I missed a tech anniversary earlier this year when email turned 50.

It was April 1971 when software engineer Ray Tomlinson first used the @ symbol as the key to route a message between computers within ARPANET, the birthplace of the Internet. Tomlinson was working on a project at the U.S. Advanced Research Projects Agency that was developing a way to facilitate communications between government computers. There had been transmission of messages between computers starting in 1969, but Tomlinson’s use of the @ symbol has been identified as the birth of network email.

ARPA was renamed DARPA in 1972, reflecting its role within the military. DARPA kept a key role in the further development of email and created a set of email standards in 1973. These standards include things like having the “To” and “From” fields as headers for emails.

Email largely remained as a government and university protocol until 1989, when CompuServe made email available to its subscribers. CompuServe customers could communicate with each other, but not with the outside world.

In 1993, AOL further promoted email when every AOL customer was automatically given an email address. This led to the “You’ve got mail” slogan, and I can still hear the AOL announcement in my head today.

In 1996, Hotmail made a free email address available to anybody who had an Internet connection. Millions of people got email addresses, and the use of email went mainstream. If you ever used Hotmail, you’ll remember the note at the bottom of every email that said, “P.S. I love you. Get your free email here”. Hotmail was purchased by Microsoft in 1997 and was morphed over time into Outlook. This was one of the first big tech company acquisitions, at $400 million, which showed that huge value could be created by giving away web services for free.

In 1997, Yahoo launched a competing free email service that gave users even more options.

In 2004, Google announced its free Gmail service with the announcement that users could have a full gigabyte of storage, far more than anybody else offered.

Over the years, there have been many communications platforms launched that promised to displace email. This includes Facebook Messenger, WeChat, Slack, Discord, and many others. But with all of these alternate ways for people to communicate, email still reigns supreme and usage has grown every year since inception.

There are over 300 billion emails generated every day. Gmail alone has 1.6 billion email addresses, representing 20% of all people on the planet. In the workplace, the average American employee sends 40 emails each day and receives 121.

The beauty of email is its simplicity. It can work across any technology platform. Message headers and attachments use simple, standardized text formats (the MIME standards), and routing is done with SMTP (Simple Mail Transfer Protocol), which allows messages to be sent to anybody else in the world.
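As a small illustration of that simplicity, here is a sketch of sending a message using Python’s standard library – the mail server, port, and addresses are placeholders, not a real configuration:

```python
# A minimal sketch of sending an email over SMTP with Python's standard
# library. The server name, port, and addresses are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"   # the "From" header standardized back in 1973
msg["To"] = "bob@example.com"       # the "To" header
msg["Subject"] = "Happy 50th, email"
msg.set_content("Fifty years in and still going strong.")

# SMTP relays the message toward the recipient's mail server.
with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls()               # most servers now require an encrypted session
    server.send_message(msg)
```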

On the downside, the ease of email spawned spam once marketers found that they could sell even the most bizarre products if they sent enough emails. More recently, email has been used to implant malware on a recipient’s computer when they unwittingly open malicious attachments.

One downside for the future of email is that many Americans under 30 hate using it. We’ll have to see over time if email gets displaced, but it would be a slow transition.

But email is still a powerful tool that is ingrained in our daily lives. Email was one of the early features that lured millions into joining the web. So happy birthday, email.

Using Private Rights-of-Way for Fiber

As if the broadband industry didn’t already have enough obstacles, a new issue has arisen in Virginia. A couple in Culpeper County, John and Cynthia Grano, have sued the Rappahannock Electric Cooperative to stop it from putting fiber on existing pole lines that are located on a private easement.

To put this lawsuit into perspective, Virginia law in the past would have required a utility to negotiate a private easement before placing utility networks on private land. But in 2020, the legislature passed a new law that allows electric and communications utilities to add fiber along existing aerial and buried rights-of-way without getting additional permission from property owners. This law was passed to make it easier to build fiber in rural Virginia.

The Cooperative was getting ready to embark on a $600 million rural fiber project to bring broadband to rural customers, but this lawsuit has caused the Cooperative to halt plans for now.

As is usual with lawsuits, there are always additional facts to consider. The rights-of-way in question are not along a road in a public right-of-way. Instead, the fiber route cuts across the landowner’s property, which also is the site for one of the Cooperative’s electric substations. Prior to the law being passed, the Cooperative had offered a $5,000 fee to use the rights-of-way on the property.

It might seem logical that the new law would have preempted this kind of lawsuit – because this situation is exactly what legislators had in mind when they passed the law. But I’ve learned in this industry that a new law is only truly secure after the law has been successfully tested in court.

This case has already made it through the first round of the courts, where a U.S. District Court sided with the property owners. The ruling said that the new law stripped the property owners of existing rights that had been established in the 1989 easement agreement with the Cooperative. The court said that landowners had lost property value even without the Cooperative trying to hang new fiber on existing easements.

This lawsuit has to bring a chill to any fiber builder in the country that relies on private rights-of-way and easements to build their project. The right to use public rights-of-way has been long established and cemented by challenges to laws early in the last century. This new Virginia law tried to grant the same status to private easements that have always been given to public rights-of-way – and that is a new area of law.

I have to assume that, for this issue to stop the fiber expansion, the Cooperative must have a lot of electric lines that use private rights-of-way. Electric grids routinely cross private land – the large tower transmission grids mostly use private rights-of-way, and utilities rarely build high-voltage routes along public roads. If the issue were only with this one farm, the Cooperative could probably bypass it, but I’m sure the issue applies to many other properties as well.

The lawsuit should raise a red flag for any ISP that has rights-of-way on private land. There are a lot more private easements in place than you might suppose. Many subdivisions own their own roads. Private roads are routine in rural areas. ISPs routinely rent land for huts and cabinets.

None of this will be any comfort to the many households that were slated to get fiber broadband. Electric cooperatives like Rappahannock are leading the way in much of rural America in bringing fiber to areas with little or no current broadband. Virginia has a state goal of solving the rural broadband gap by the end of 2024, and this lawsuit will put a damper on those plans. As a little side note that will drive broadband advocates crazy: the property owner in this case has subscribed to Starlink and is not impacted by having to wait for better broadband.

Preparing for Storm Damage

After every major hurricane, like the category 4 Ida that recently hit Louisiana, there is talk in the telecom and power industries about ways to better protect our essential power and communication grids. There was major damage to grids and networks in Louisiana from hurricane winds and storm surge, and massive flooding in the mid-Atlantic from western Maryland to New York City.

One thing that we’ve learned over time is that there is no way to stop storm damage. Strong hurricanes, tornados, floods, and ice storms are going to create damage regardless of the steps we might take to try to keep aerial utilities safe. What matters most is the amount of time it takes to make repairs – obviously, the shorter, the better.

A natural impulse is to bury all of the aerial utilities. However, the cost to bury wires in many places is exorbitant. There are also issues during some weather events from buried facilities. It takes a long time and a lot of effort to find and fix problems in flooded areas. Buried electric lines are also sensitive to the saltwater corrosion that comes from storm surges from big coastal storms.

The Electric Power Research Institute (EPRI) has been working on ideas to better protect wires, poles, and towers during big storms. EPRI operates an outdoor laboratory in Lenox, Massachusetts, to create simulations of storm damage. EPRI’s goal is to find techniques that either minimize storm damage or shorten the time needed to make repairs.

EPRI’s research is intriguing to anybody who’s been in the industry. For example, they are exploring ways that towers and poles can be made to collapse in pre-planned ways rather than be destroyed by winds. A controlled collapse could avoid all of the problems of snapped and dangerous power lines. If done properly, EPRI is hoping there would be a way to stand a downed pole back up in hours instead of days.

They are exploring a similar line of research, looking at ways for aerial wires to disconnect from poles before drastic damage occurs. This would stop the domino effect of multiple poles being broken and dragged down by a heavy tree landing on a pole span. It would be a lot easier to put fallen wires back onto poles than to untangle and splice wire breaks caused by catastrophic damage.

EPRI is also exploring other aspects of effectuating storm damage repair. For example, there is a white paper on its site that looks at the effectiveness of alternate telecommunications channels that would let the key players at a utility communicate and coordinate after a storm. All of the normal modes of communication are likely to go dead when the wires and towers come tumbling down. The white paper looked at using GEO satellites, LEO satellites, and HF radios to communicate. The goal was to find a communications medium that would allow for a 3-way call after more conventional communication paths are out of service. The best solution was the high-orbit GEO satellites.

This kind of research is both important and interesting because coordination of repair efforts is one of the biggest challenges after every disaster. A utility can have standby crews ready to work, but nothing gets done until somebody can tell them what most needs to be addressed.

The Beginnings of 8K Video

In 2014 I wrote a blog asking if 4K video was going to become mainstream. At that time, 4K TVs were just hitting the market and cost $3,000 and higher. There was virtually no 4K video content on the web other than a few experimental videos on YouTube. But in seven short years, 4K has become a standard technology. Netflix and Amazon Prime have been shooting all original content in 4K for several years, and the rest of the industry has followed. Anybody who purchased a TV since 2016 almost surely has 4K capabilities, and a quick scan of shopping sites shows 4K TVs as cheap as $300 today.

It’s now time to ask the same question about 8K video. TCL is now selling a basic 8K TV at Best Buy for $2,100. But like with any cutting-edge technology, LG is offering a top-of-the-line 8K TV on Amazon for $30,000. There are a handful of video cameras capable of capturing 8K video. Earlier this year, YouTube provided the ability to upload 8K videos, and a few are now available.

So what is 8K? The 8K designation refers to the number of pixels on a screen. High-definition TV, or 2K, allowed for 1920 x 1080 pixels. 4K grew this to 3840 x 2160 pixels, and the 8K standard increases pixels to 7680 x 4320. An 8K video stream has 16 pixels in the space where a high-definition TV had a single pixel (and 4 pixels where a 4K TV had one).
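For anyone who wants to check the arithmetic, here is a quick calculation of the pixel counts behind those standards:

```python
# Pixel counts behind the resolution standards named above.
resolutions = {
    "HD (2K)": (1920, 1080),
    "4K":      (3840, 2160),
    "8K":      (7680, 4320),
}
hd_pixels = 1920 * 1080
for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {pixels:>10,} pixels ({pixels // hd_pixels}x HD)")
# HD (2K):  2,073,600 pixels (1x HD)
# 4K:       8,294,400 pixels (4x HD)
# 8K:      33,177,600 pixels (16x HD)
```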

8K video won’t only bring higher clarity, but also a much wider range of colors. Video today is captured and transmitted using a narrow range of red, green, blue, and sometimes white pixels that vary inside the limits of the REC 709 color specifications. The colors our eyes perceive on the screen are basically combinations of these few colors along with current standards that can vary the brightness of each pixel. 8K video will widen the color palette and also the brightness scale to provide a wider range of color nuance.

The reason I’m writing about 8K video is that any transmission of 8K video over the web will be a challenge for almost all current networks. Full HD video requires a video stream between 3 Mbps and 5 Mbps, with the highest bandwidth needs coming from a high-action video where the pixels on the stream are all changing constantly. 4K video requires a video stream between 15 Mbps and 25 Mbps. Theoretically, 8K video will require streams between 200 Mbps and 300 Mbps.

We know that video content providers on the web will find ways to reduce the size of the data stream, meaning they likely won’t transmit pure 8K video. This is done today for all videos, and there are industry tricks used, such as not transmitting background pixels in a scene where the background doesn’t change. But raw 4K or 8K video that is not filtered to be smaller will need the kind of bandwidth listed above.

There are no ISPs, even fiber providers, that are ready for large-scale adoption of 8K video on the web. It wouldn’t take many simultaneous 8K streams in a neighborhood to exhaust the capacity of a 2.4 Gbps node in a GPON network. We’ve already seen faster video be the death knell of other technologies – people were largely satisfied with DSL until they wanted to use it to view HD video – at that point, neighborhood DSL nodes got overwhelmed.
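A quick sanity check shows how little headroom there is, using the per-stream bandwidth estimates from above – the math is illustrative and ignores compression tricks and the other traffic a node carries:

```python
# How many simultaneous video streams a shared 2.4 Gbps GPON node can carry,
# using the high end of the per-stream estimates from the text. Illustrative
# only; real networks also carry other traffic and oversubscribe capacity.
gpon_node_mbps = 2400
stream_mbps = {"HD": 5, "4K": 25, "8K": 250}
for video_format, mbps in stream_mbps.items():
    print(f"{video_format}: ~{gpon_node_mbps // mbps} simultaneous streams")
# HD: ~480 streams;  4K: ~96 streams;  8K: fewer than 10 streams per node
```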

There were a lot of people in 2014 who said that 4K video was a fad that would never catch on. With 4K TVs priced over $3,000 at the time and a web that was not ready for 4K video streams, this seemed like a reasonable guess. But as 4K TV sets got cheaper and as Netflix and Amazon publicized 4K capabilities, the format became commonplace. It took about five years for 4K to go from YouTube rarity to mainstream. I’m not predicting that 8K will follow the same path – but it’s possible.

For years, I’ve advised building networks that are ready for the future. We’re facing a possible explosion of broadband demand over the next decade from applications like 8K video and telepresence – both requiring big bandwidth. If you build a network today without contemplating these future needs, you are looking at being obsolete in a decade – likely before you’ve even paid off the debt on the network.