The Beginnings of 8K Video

In 2014, I wrote a blog asking whether 4K video was going to become mainstream. At that time, 4K TVs were just hitting the market and cost $3,000 and higher. There was virtually no 4K video content on the web other than a few experimental videos on YouTube. But in seven short years, 4K has become a standard technology. Netflix and Amazon Prime have been shooting all original content in 4K for several years, and the rest of the industry has followed. Anybody who has purchased a TV since 2016 almost surely has 4K capability, and a quick scan of shopping sites shows 4K TVs as cheap as $300 today.

It’s now time to ask the same question about 8K video. TCL is now selling a basic 8K TV at Best Buy for $2,100. But as with any cutting-edge technology, there is also a premium tier – LG is offering a top-of-the-line 8K TV on Amazon for $30,000. There are a handful of video cameras capable of capturing 8K video. Earlier this year, YouTube provided the ability to upload 8K videos, and a few are now available.

So what is 8K? The 8K designation refers to the number of pixels on a screen. High-Definition TV, or 2K, allowed for 1920 x 1080 pixels. 4K grew this to 3840 x 2160 pixels, and the 8K standard increases the count to 7680 x 4320. An 8K video stream has four pixels in the space where a 4K TV had a single pixel – and sixteen pixels in the space where a high-definition TV had one.
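
To make the pixel arithmetic concrete, here is a minimal Python sketch using the standard resolutions cited above (nothing here is measured data – it's just the multiplication):

```python
# Pixel counts for the resolutions cited above.
resolutions = {
    "HD (2K)": (1920, 1080),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

hd_pixels = 1920 * 1080

for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {pixels:,} pixels ({pixels / hd_pixels:.0f}x HD)")

# HD (2K): 2,073,600 pixels (1x HD)
# 4K: 8,294,400 pixels (4x HD)
# 8K: 33,177,600 pixels (16x HD)
```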

8K video will bring not only higher clarity but also a much wider range of colors. Video today is captured and transmitted using a narrow range of red, green, blue, and sometimes white pixels that vary inside the limits of the REC 709 color specification. The colors our eyes perceive on the screen are combinations of these few colors, with current standards also varying the brightness of each pixel. 8K video will widen both the color palette and the brightness scale to provide a wider range of color nuance.

The reason I’m writing about 8K video is that any transmission of 8K video over the web will be a challenge for almost all current networks. Full HD video requires a video stream between 3 Mbps and 5 Mbps, with the highest bandwidth needs coming from a high-action video where the pixels on the stream are all changing constantly. 4K video requires a video stream between 15 Mbps and 25 Mbps. Theoretically, 8K video will require streams between 200 Mbps and 300 Mbps.

We know that video content providers on the web will find ways to reduce the size of the data stream, meaning they likely won’t transmit pure 8K video. This is done today for all videos, and there are industry tricks used, such as not transmitting background pixels in a scene where the background doesn’t change. But raw 4K or 8K video that is not filtered to be smaller will need the kind of bandwidth listed above.

There are no ISPs, even fiber providers, that are ready for the large-scale adoption of 8K video on the web. It wouldn’t take many simultaneous 8K subscribers in a neighborhood to exhaust the capacity of a 2.4 Gbps node in a GPON network. We’ve already seen faster video be the death knell of other technologies – people were largely satisfied with DSL until they wanted to use it to view HD video, at which point neighborhood DSL nodes got overwhelmed.

There were a lot of people in 2014 who said that 4K video was a fad that would never catch on. With 4K TVs at the time priced over $3,000 and a web that was not ready for 4K video streams, this seemed like a reasonable guess. But as 4K TV sets got cheaper and as Netflix and Amazon publicized 4K video capabilities, the 4K format became commonplace. It took about five years for the 4K phenomenon to go from YouTube rarity to mainstream. I’m not predicting that 8K will do the same thing – but it’s possible.

For years I’ve been advising clients to build networks that are ready for the future. We’re facing a possible explosion of broadband demand over the next decade from applications like 8K video and telepresence – both requiring big bandwidth. If you build a network today that doesn’t contemplate these future needs, you are looking at obsolescence within a decade – likely before you’ve even paid off the debt on the network.

Demystifying Oversubscription

I think the concept that I have to explain the most as a consultant is oversubscription, which is the way that ISPs share bandwidth between customers in a network.

Most broadband technologies distribute bandwidth to customers in nodes. ISPs using passive optical networks, cable DOCSIS systems, fixed wireless technology, and DSL all distribute bandwidth to a neighborhood device of some sort that then distributes the bandwidth to all of the customers in that neighborhood node.

The easiest technology to demonstrate this with is a passive optical network, since most ISPs build nodes of 32 homes or fewer. GPON technology delivers 2.4 gigabits of download bandwidth to the neighborhood node to share among those 32 households.

Let’s suppose that every customer has subscribed to a 100 Mbps broadband service. Collectively, for the 32 households, that totals 3.2 gigabits of demand – more than the 2.4 gigabits being supplied to the node. When people first hear about oversubscription, they think that ISPs are somehow cheating customers – how can an ISP sell more bandwidth than is available?

The answer is that the ISP knows it’s a statistical certainty that all 32 customers won’t use the full 100 Mbps download capacity at the same time. In fact, it’s rare for a household to ever use the full 100 Mbps capability – that’s not how the Internet works. Let’s say a given customer is downloading a huge file. Even if the server at the other end of that transaction has a fast connection, the data doesn’t come pouring in from the Internet at a steady speed. Packets have to find a path between the sender and the receiver, and they arrive unevenly, in fits and starts.

But that doesn’t fully explain why oversubscription works. It works because the customers in a node never all use a lot of bandwidth at the same time. On a given evening, some of the people in the node aren’t at home. Some are browsing the web, which requires minimal download bandwidth. Many are streaming video, which requires a lot less than 100 Mbps. A few are using the bandwidth heavily, like a household with several gamers. But collectively, it’s nearly impossible for this particular node to use the full 2.4 gigabits of bandwidth.

Let’s instead suppose that everybody in this 32-home node has purchased a gigabit product, like the one delivered by Google Fiber. Now the collective potential bandwidth demand is 32 gigabits, far greater than the 2.4 gigabits being delivered to the neighborhood node. This is starting to feel more like hocus pocus, because the ISP has sold 13 times the capacity that is available to the node. Has the ISP done something shady here?

The chances are extremely high that it has not. The reality is that the typical gigabit subscriber doesn’t use a lot more bandwidth than a typical 100 Mbps customer. And when the gigabit subscriber does download something, the download finishes more quickly, meaning the transaction has less chance of interfering with transactions from neighbors. Google Fiber knows it can safely oversubscribe at thirteen to one because it knows from experience that there is rarely enough usage in the node to exceed the 2.4-gigabit download feed.
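
To put numbers on both scenarios, here is a minimal sketch of the arithmetic for a 32-home GPON node. The evening usage mix at the end is an assumption chosen purely for illustration, not measured data:

```python
# Oversubscription arithmetic for a 32-home node fed by a 2.4 Gbps GPON link.
NODE_CAPACITY_MBPS = 2400
HOMES = 32

def oversubscription_ratio(plan_mbps):
    """Ratio of bandwidth sold to bandwidth actually delivered to the node."""
    return (HOMES * plan_mbps) / NODE_CAPACITY_MBPS

print(f"100 Mbps plans: {oversubscription_ratio(100):.1f}:1")    # 1.3:1
print(f"Gigabit plans:  {oversubscription_ratio(1000):.1f}:1")   # 13.3:1

# A hypothetical evening usage mix, assumed purely for illustration:
# 8 homes idle, 10 browsing (~2 Mbps), 10 streaming video (~8 Mbps), 4 heavy users (~50 Mbps).
busy_hour_demand_mbps = 8 * 0 + 10 * 2 + 10 * 8 + 4 * 50
print(f"Estimated busy-hour demand: {busy_hour_demand_mbps} Mbps of {NODE_CAPACITY_MBPS} Mbps")
```

Even with generous per-home assumptions, the estimated busy-hour demand comes nowhere near the 2.4 gigabits feeding the node.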

But it can happen. If this node is full of gamers, and perhaps a few super-heavy users like doctors who view big medical files at home, the node could have problems at this level of oversubscription. ISPs have easy solutions for this rare event. The ISP can move some of the heavy users to a different node. Or the ISP can even split the node into two, with 16 homes on each node. This is why customers with a quality-conscious ISP rarely see any glitches in broadband speeds.

Unfortunately, this is not true with the other technologies. DSL nodes are overwhelmed almost by definition. Cable and fixed wireless networks have always been notorious for slowing down at peak usage times when all of the customers are using the network. Where a fiber ISP won’t put any more than 32 customers on a node, it’s not unusual for a cable company to have a hundred customers on one.

Where the real oversubscription problems are seen today is on the upload link, where routine household demand can overwhelm the size of the upload link. Most households using DSL, cable, and fixed wireless technology during the pandemic have stories of times when they got booted from Zoom calls or couldn’t connect to a school server. These problems are fully due to the ISP badly oversubscribing the upload link.

The DOCSIS vs. Fiber Debate

In a recent article in FierceTelecom, Curtis Knittle, the VP of Wired Technologies at CableLabs, argues that the DOCSIS standard is far from over and that cable company coaxial cable will be able to compete with fiber for many years to come. It’s an interesting argument, and from a technical perspective, I’m sure Mr. Knittle is right. The big question will be whether the big cable companies decide to take the DOCSIS path or bite the bullet and start the conversion to fiber.

CableLabs released the DOCSIS 4.0 standard in March 2020, and the technology is now being field-tested, with deployments planned through 2022. In the first lab trial of the technology earlier this year, Comcast achieved a symmetrical 4 Gbps speed. Mr. Knittle claims that DOCSIS 4.0 can outperform the XGS-PON we’re now seeing deployed. He claims that DOCSIS 4.0 will be able to produce a true 10-gigabit output, while the actual output of XGS-PON is closer to 8.7 Gbps downstream.

There are several issues that are going to drive the decision-making in cable company board rooms. The first is cost. An upgrade to DOCSIS 4.0 doesn’t sound cheap. The upgrade to DOCSIS 4.0 increases system bandwidth by working in higher frequencies – similar to G.Fast on telephone copper. A full upgrade to DOCSIS 4.0 will require ripping and replacing most network electronics. Coaxial copper networks are getting old and this probably also means replacing a lot of older coaxial cables in the network. It probably means replacing power taps and amplifiers throughout the outside network.

Building fiber is also expensive. However, the cable companies have surely learned the lesson from telcos like AT&T and Verizon that there are huge cost savings from overlashing fiber onto existing wires. A cable company can install fiber for a lot less than any competitor by overlashing onto its existing coax.

There is also an issue of public perception. I think the public believes that fiber is the best broadband technology. Cable companies already see that they lose the competitive battle in any market where fiber is built. The big telcos all have aggressive plans to build fiber-to-the-premise, and there is a lot of fiber coming in the next five years. Other technologies like Starry wireless are also going to nibble away at the urban customer base. All of the alternative technologies to cable have faster upload speeds than the current DOCSIS technology. The cable industry has completely avoided talking about upload speeds because it knows how cable subscribers struggled working and schooling from home during the pandemic. How many years can the cable companies stave off competitors that offer a better experience?

There is finally the issue of speed to market. The first realistic date to start implementing DOCSIS 4.0 on a large scale is at least five years from now. That’s five long years to limp forward with underperforming upload speeds. Customers who become disappointed with an ISP are the first to leap when an alternative appears. Five years is a long time to cede the marketing advantage to fiber.

The big cable companies have a huge market advantage in urban markets – but they are not invulnerable. Comcast and Charter have both kept Wall Street happy with continuous growth from capturing disaffected DSL customers. Wall Street is going to have a totally different view of the companies if that growth stops. The wheels will likely come off their stock prices if the two companies ever start losing customers.

I’ve always thought that the cable companies’ success over the last decade has been due more to having a lousy competitor in DSL than to any great performance by the cable companies. Every national customer satisfaction poll continues to rank cable companies at the bottom, behind even the IRS and funeral homes.

We know that fiber builders do well against cable companies. AT&T says that it gets a 30% market share in a relatively short time everywhere it builds fiber. Over time, AT&T thinks it will capture 50% of all subscribers with fiber, which means a 55% to 60% market share. The big decision for the cable companies is whether they are willing to watch their market position start to wane while waiting for DOCSIS 4.0. Are they going to bet another decade of success on aging copper networks? We’ve already seen Altice start the conversion to fiber. It’s going to be interesting to watch the other big cable companies wrestle with this decision.

Is Wireless Power a Possibility?

Wireless power transmission (WPT) is any technology that can transmit electrical power between two places without wires. As we move towards a future with small sensors in homes, fields, and factories, this is an area of research that is getting a lot more attention. The alternative to wireless power is to put small batteries in sensors and devices, batteries that then have to be replaced periodically.

There are half a dozen techniques that can be used to create electric power remotely. Most involve transmitting some form of electromagnetic radiation that is used to excite a remote receiver that converts the energy into electricity. There have been trials using frequencies of all sorts, including microwaves, infrared light, and radio waves.

The most commonly used form of wireless power transmission today is used in wireless pads that can recharge a cellphone or other small devices. This technology uses inductive coupling. This involves passing alternating current through an induction coil. Since any moving electrical current creates a magnetic field, the induction coil creates a magnetic or electromotive field that fluctuates in intensity as the AC current constantly changes. A cellphone pad only works for a short distance because the coils inside the device are small.

There are a few household applications where induction charging works over slightly greater distances, such as automatically charging electric toothbrushes and some hand tools. We’ve been using the technology to recharge implanted medical devices since the 1960s. Induction charging has also been implemented on a larger scale. In 1980, scientists in California developed a bus that could be recharged wirelessly. There is currently research in Norway and China aimed at topping off the batteries of cars and taxis so that electric vehicles don’t have to stop to recharge.

There have been successful uses of transmitted radiation to create remote electricity over great distances. Radio and microwaves can be beamed great distances to excite a device called a rectenna, or rectifying antenna, which converts the transmitted frequencies into electricity. This has never been able to produce a lot of power, but scientists are looking at the technology again because it could be a way to charge devices like farm sensors in fields.

The private sector is exploring WPT solutions for everyday life. Wi-Charge is using safe infrared light to charge devices within a room. Energous has developed a radio transmitter that can charge devices within a 15-meter radius. Ossia is developing wireless charging devices for cars that will automatically charge cellphones and other consumer devices. We’re not far away from a time when motion detectors, smoke alarms, CO2 sensors, and other devices can be permanently powered without a need for batteries or hardwiring.

Scientists and manufacturers are also exploring long-distance power transmission. Emrod in New Zealand is exploring bringing power to remote sites through the beaming of radio waves. On an even grander scale, NASA is exploring the possibility of beaming power to earth gathered from giant solar arrays in space.

Remote power was originally envisioned by Nikola Tesla, and perhaps over the next few decades it will become an everyday technology that we take for granted. I’m just looking forward to the day when I’m not awakened in the middle of the night by a smoke detector that wants me to know it’s time to change the battery.

Are We Ready for Big Bandwidth Applications?

There is a recent industry phenomenon that could have major impacts on ISP networks in the relatively near future. There has been an explosion of households that subscribe to gigabit data plans. At the end of 2018, only 1.8% of US homes subscribed to a gigabit plan. This grew to 2.8% by the end of 2019. With the pandemic, millions of homes upgraded to gigabit plans in an attempt to find a service that would support working from home. By the end of the third quarter of 2020, gigabit households grew to 5.6% of all households, a doubling in nine months. But by the end of last year, this mushroomed to 8.5% of all households. OpenVault reports that as of the end of the first quarter of 2021, 9.8% of all households subscribe to gigabit plans.

I have to think that a lot of these upgrades came from homes that wanted faster upload speeds. Cable company broadband is stingy with upload speeds on basic 100 Mbps and 200 Mbps plans. Surveys my company has done show a lot of dissatisfaction with urban ISPs, and my guess is that most of that unhappiness is due to sluggish upload performance.

Regardless of how we found ourselves at this place, one out of ten households in the US now buys gigabit broadband. As an aside, that fact alone should end any further discussion of 25/3 Mbps as a definition of broadband.

My ISP clients tell me that the average gigabit household doesn’t use a lot more bandwidth than customers buying 100 Mbps broadband – they just get things faster. If you’ve never worked on a gigabit connection, you might not understand the difference – but with gigabit broadband, websites appear on your screen almost instantaneously. The word I’ve always used to describe gigabit broadband is ‘snappy’. It’s like snapping your fingers and what you want appears instantly.

I think the fact that 10% of households have gigabit speeds opens up new possibilities for content providers. In the early days after Google Fiber got the country talking about gigabit fiber, the talking heads in the industry were all asking when we’d see gigabit applications. There was a lot of speculation about what those applications might do – but we never found out because nobody ever developed them. There was no real market for gigabit applications when only a handful of households were buying gigabit speeds. Even at the end of 2019, it was hard to think about monetizing fast web products when less than 3% of all homes could use them.

My instincts tell me that hitting a 10% market share for gigabit subscribers has created the critical mass of gigabit households that might make it financially worthwhile to offer fast web applications. The first such applications are probably telepresence and 3D gaming in your living room space. It’s hard to think that there is no market for this.

I know that ISPs are not ready for households to actually use the speeds they have been peddling to them. There is no ISP network anywhere, including fiber networks, that wouldn’t quickly bog down and die if a bunch of subscribers started streaming at fast speeds between 100 Mbps and a gigabit. ISP networks are designed around the concept of oversubscription – meaning that customers don’t all use their bandwidth at the same time. The normal parameters for oversubscription are already changing due to the proliferation of VPN connections made for working and schooling from home – ISPs must accommodate large chunks of bandwidth that are in constant use and that can’t be shared with other customers. Home VPN connections have paralyzed DSL networks, but it’s something that even fiber network engineers are watching carefully.

I’ve been imagining what will happen to a network if households start streaming at a dedicated symmetrical 100 Mbps instead of connecting to Zoom at 2 Mbps. It wouldn’t take many such customers in any neighborhood to completely tie up network resources.
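
Here is a rough sketch of that concern, again using a 2.4 Gbps GPON node as the example; the per-stream figures are assumptions for illustration only:

```python
# How many dedicated streams fit in a 2.4 Gbps GPON node (figures assumed for illustration).
NODE_CAPACITY_MBPS = 2400

ZOOM_STREAM_MBPS = 2       # a typical video call today
FUTURE_STREAM_MBPS = 100   # a hypothetical dedicated symmetrical telepresence stream

print(NODE_CAPACITY_MBPS // ZOOM_STREAM_MBPS)     # 1200 concurrent 2 Mbps calls
print(NODE_CAPACITY_MBPS // FUTURE_STREAM_MBPS)   # 24 concurrent 100 Mbps streams
# 24 is fewer than the 32 homes typically sharing the node, before counting any other traffic.
```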

I will be shocked if there aren’t entrepreneurs already dreaming up gaming and telepresence applications that take advantage of the 10% market share for gigabit broadband. Looking back at the past, new technology phenomena seem to hit almost overnight. It’s not hard to imagine a craze where a million gigabit homes are playing live 3D games in the living room air. When that finally happens, ISPs are going to be taken by surprise, and not in a good way. We’ll see the instant introduction of data caps to stop customers from using broadband. But we’ll also see ISPs beefing up networks – they’ll have no choice.

Drone Research in North Carolina

The Platform for Advanced Wireless Research (PAWR) program, funded by the National Science Foundation, recently expanded the footprint for a wireless research trial in North Carolina. Labeled the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW), the wireless testbed was created in and around Cary, NC, and now also includes the Lake Wheeler Field Laboratory and the Centennial Campus of NC State in Raleigh.

The wireless testbed is one of four created in the country. There will be a number of participants in the experiments, including NC State University, the Wireless Research Center of North Carolina, Mississippi State University, the Renaissance Computing Institute of the University of North Carolina at Chapel Hill, the town of Cary, the City of Raleigh, Purdue University, and the University of South Carolina.

The AERPAW program has been seeded with $24 million of grants from the National Science Foundation. The primary purpose of the North Carolina testbed is to explore the integration of drones with 5G wireless technology and to accelerate the development and commercialization of promising technologies.

The recent expansion came when the FCC named the area an Innovation Zone as part of its Program Experimental Licenses. These licenses give qualified research institutions the ability to conduct multiple, unrelated experiments over a range of frequency bands without having to ask permission from the FCC each time. The combination of the PAWR grants and the FCC Innovation Zone means an accelerated timeline for gaining access to the spectrum needed for experiments.

The North Carolina experiments are just getting underway and should be in full swing by 2023. There are already some interesting experiments being contemplated:

  • There will be an experiment to explore the feasibility of using drones to temporarily act as cell sites after the damage caused by hurricanes and other disasters. As our society becomes more reliant on 5G connectivity, there will be an urgency in restoring damaged cell sites quickly. The experiments will also consider the use of 5G cell sites mounted on cars, buses, and golf carts.
  • This same concept might be able to make a portion of a cellular network mobile, meaning the network could be shifted to serve increased demand when people and traffic are unexpectedly busy. Picture 5G drones flying over a football stadium.
  • There will be experiments to try to improve and guarantee the accuracy and ability of drones to deliver key packages such as medicines or commercial parcels.
  • Research is contemplated to use drones to collect data from IoT sensors on nearby farms.
  • There is also an experiment envisioned that will look at ways to improve the ability of air traffic control to track and account for drones.

The PAWR platform is interesting in that commercial companies can request research into specific applications. In this case, a corporation could fund research into a specific use of drones, and the Innovation Zone means that any spectrum issues associated with trying new ideas can be accommodated. As might be expected, several wireless vendors are part of the platform. For example, Ericsson has installed 4G/5G RAN equipment at the Lake Wheeler site to initiate experimentation. Thirty-five vendors plan to participate in the four wireless testbeds around the country and will likely be the major beneficiaries of any technologies that prove to be viable.

This kind of research is vital if we are to develop wireless technologies for widespread use. There is only so much experimentation that can happen in labs, and this kind of testbed allows researchers to quickly identify both issues and benefits of 5G drone applications.

Satellite Broadband and Farming

An article in Via Satellite discussed John Deere’s interest in using low-orbit satellite broadband as the platform to expand smart farming. This makes perfect sense because there is no quicker way to bring broadband coverage to all of the farms in the world.

There are only a few other technologies that might be able to fill the farm broadband gap. The one we’ve been thinking about for the last decade is cellular, but there is a major reluctance of the big cellular companies to invest in the needed rural cellular towers and to constantly upgrade electronics for cell sites that have only a few customers. If you’ve ever traveled to the upper Midwest, you’ve stood on farm roads where agricultural fields stretch to the horizon in every direction. Standing in such a place makes you realize that cellular is not the answer. No carrier will invest in a cell tower where the potential customers are a few farms and the farm machinery. No farmer wants to pay a cellular bill large enough to support a single cell tower.

There is also experimentation with the use of permanent blimps that can provide broadband coverage over a several-county area. I wrote a blog last year about a trial with blimps in Indiana. It’s an interesting idea, and it may be a good way to support self-driving equipment. But the big question for blimps is whether they can handle millions of sensors and process huge amounts of data in real time, like the monstrously large data files created by surveying fields for a wide variety of soil conditions. I also have to wonder about the ability to replicate this technology outside of prosperous farming areas in the US.

Before John Deere or anybody else decides that satellites are the answer to smart agriculture, we need to know more about the real-life capabilities of satellite constellations. Big data users like cell sites and farms could bog down a satellite network and lower its usefulness for retail ISP services.

I think we’re going to quickly find out if satellite technology has a limited capacity – something we learned with earthbound networks a long time ago. An individual satellite is the equivalent of a broadband node device like a DSLAM for DSL or an OLT used in FTTP. Local broadband devices are subject to being overwhelmed by a few heavy data users. There is a reason that we use separate networks today to support cell towers and home broadband – because both would suffer by being on the same node.

The article raised this question and quoted a John Deere engineer as saying that the ideal situation would be for John Deere to own a satellite constellation. Putting the cost issue aside, space is going to get far too busy if we allow individual corporations to put up fleets of satellites. It’s not hard to imagine a John Deere constellation, a Verizon cellular constellation, a FedEx constellation, etc. The sky could get incredibly crowded.

The alternative is for the handful of satellite companies to sell access to corporations like John Deere. And that means somehow satisfying hugely different broadband needs from a satellite constellation. It’s not hard to imagine Starlink preferring big farms and cell sites over residential subscribers. Farmers and cellular companies are likely willing to spend more than households, but will expect priority service for the higher fees.

I’m sure that the satellite companies are flooded with unique requests for ways to use the satellites. If I owned a satellite constellation, I would likely pursue the most lucrative, highest-margin customers, and it’s hard to think somebody like Starlink won’t do the same. If they do, there will either be a limit on the number of residential customers they can accept or a degraded level of broadband for customers not willing to pay a premium corporate rate.

The Reemergence of Holding Times

There is an interesting phenomenon happening with ISP networks that I don’t see anybody discussing. During the last year, we saw a big change in the nature of our broadband usage, as many of us began connecting to remote work or school servers or sitting on long Zoom calls.

We can already see that these changes have accelerated the average home usage of broadband. OpenSignal reports that the average broadband usage per home grew from 274 gigabytes per month just before the pandemic to 462 gigabytes per month measured at the end of the first quarter of this year. Since much of the new usage came during the daytime, most ISPs reported that they were able to handle the extra usage. This makes sense because ISP networks in residential neighborhoods were relatively empty during the daytime before the pandemic – adding the additional usage at these non-busy times did not stress networks. Instead, the daytime hours started to become as busy as the evening hours, which have historically been the busiest time for residential networks.

But there is one impact of the way networks are now being used that is affecting ISPs. Before the pandemic, most of the use of the Internet in residential neighborhoods was bursty. People shopped or surfed the web, and each of these events resulted in short bursts to the Internet. Even video streaming is bursty – when you watch Netflix, you’re not downloading a video continuously. Instead, Netflix feeds you short, fast bursts of content that cache on your device and keep you ahead of what you are watching.

But our new network habits are very different. People are connecting to a school or work server with a VPN and keeping the connection for hours. Most Zoom video calls last 30 minutes to an hour. Suddenly, we’re using bandwidth resources for a long time.

In telephone networks, we used to refer to this phenomenon as holding times. Holding times were important because they helped to determine how many trunks, or external connections, were needed to handle all of the demand. A longer holding time for a given kind of traffic meant that more external trunks were needed for that kind of calling. This is pure math – you can fit twice as many calls into an hour if the average holding time is five minutes instead of ten minutes. A telephone company would have multiple kinds of trunks leaving a central office – some trunks for local traffic between nearby exchanges and other trunks for different types of long-distance traffic. Traffic engineers measured average holding times to calculate the right number of trunks for each kind of traffic.
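
That "pure math" can be shown in a tiny sketch, deliberately simplified – it ignores the real traffic-engineering formulas like Erlang B that engineers used for blocking probability:

```python
# Calls per hour that a group of trunks can carry, as a function of holding time.
def calls_per_hour(trunks, avg_holding_minutes):
    return trunks * 60 / avg_holding_minutes

print(calls_per_hour(trunks=10, avg_holding_minutes=5))    # 120.0 calls
print(calls_per_hour(trunks=10, avg_holding_minutes=10))   # 60.0 calls - half as many
```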

The fact that residents are maintaining Internet connections for hours is having the same kind of impact on broadband networks. The easiest place to understand this is in the neighborhood network. Consider a neighborhood served by DSL that has a DS3 backhaul provided by the telephone company – that’s 45 megabits per second of capacity. Such a connection can support a lot of bursty traffic because requests to use the Internet come and go quickly. But the new, long-duration broadband holding times can quickly kill a DSL neighborhood connection, as we saw during the pandemic. If only 20 homes in the neighborhood (which might consist of 100 homes) connect to a school or work server using a 2 Mbps connection, then 40 of the 45 megabits is fully occupied for that use and can’t be used for anything else. It’s possible for this local network to become totally locked up with heavy VPN usage.
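
Restating the DS3 example as a quick calculation (the 20 VPN connections and 2 Mbps per connection are the assumptions from the paragraph above):

```python
# Backhaul occupancy for a DSL neighborhood fed by a 45 Mbps DS3.
DS3_CAPACITY_MBPS = 45
VPN_CONNECTIONS = 20     # homes holding an all-day work or school VPN session
VPN_STREAM_MBPS = 2      # bandwidth each long-duration connection ties up

occupied = VPN_CONNECTIONS * VPN_STREAM_MBPS
print(f"Held by long-duration connections: {occupied} Mbps")                  # 40 Mbps
print(f"Left for the other ~80 homes: {DS3_CAPACITY_MBPS - occupied} Mbps")   # 5 Mbps
```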

This kind of network stress doesn’t just affect DSL networks, but every broadband technology. The connections inside the networks between homes and the hub have gotten far busier as people lock up Internet links for long periods of time. For technologies like DSL with small backhaul pipes, this phenomenon has been killing usage for whole neighborhoods. This is the phenomenon that killed the upload backhaul for cable companies. For technologies with larger backhaul bandwidth, this phenomenon means the backhaul paths are much fuller and will have to be upgraded a lot sooner than anticipated.

This phenomenon will ease somewhat if schools everywhere return to in-person classes. However, it appears that we’re going to continue to have people working from home. And video calling has moved into the mainstream. That means that backhaul connections inside ISP networks are a lot busier than any network engineer would have predicted just two years ago. While some of the extra traffic comes from increased broadband volumes, much of it is related to the much longer customer holding times – a term we’ve never used before with broadband networks.

Hollow Core Fiber

BT, formerly known as British Telecom, has been working with Lumenisity to greatly improve the performance of hollow core fiber. This is fiber that takes advantage of the fact that light travels faster through air than it does through glass. In a hollow core fiber, air fills central tubes surrounded by glass, and multiple tubes of glass and air sit inside a single fiber, creating a honeycomb effect.
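
A quick worked comparison shows why the air core matters for latency. The refractive indices below are textbook approximations, not figures from BT or Lumenisity:

```python
# Propagation delay per kilometer: solid glass core vs. hollow (air) core.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
N_GLASS = 1.47             # approximate index of a conventional silica core
N_AIR = 1.0003             # air is nearly the same as vacuum

delay_glass_us = N_GLASS / C_KM_PER_S * 1e6   # microseconds per km
delay_air_us = N_AIR / C_KM_PER_S * 1e6

print(f"Glass core:  {delay_glass_us:.2f} microseconds/km")   # ~4.90
print(f"Hollow core: {delay_air_us:.2f} microseconds/km")     # ~3.34
print(f"Latency reduction: {(1 - delay_air_us / delay_glass_us):.0%}")  # ~32%
```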

There was news about hollow core fiber a decade ago when a lab at DARPA worked with Honeywell to improve the performance of the fiber. They found then that they could create a single straight path of light in the tubes that was perfect for military applications. The light could carry more bandwidth for greater distances without having to be regenerated. By not bouncing through glass, the signal maintained intensity for longer distances. DARPA found the fixed orientation of light inside the tubes to be of great value for communication with military-grade gyroscopes.

Until the recent breakthrough, hollow core fiber was plagued by periodic high signal loss when the light signal lost its straight-path coherence. Lumenisity has been able to lower signal loss to 1 dB per kilometer, which is still higher than the 0.2 dB per kilometer expected for traditional fiber. However, the lab trials indicate that better manufacturing processes should be able to significantly lower signal loss.
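
To see why the remaining attenuation gap matters, here is a simple comparison of cumulative loss at the cited rates; the span lengths are arbitrary examples:

```python
# Cumulative optical loss over distance at the attenuation rates cited above.
HOLLOW_CORE_DB_PER_KM = 1.0   # Lumenisity's current figure
TRADITIONAL_DB_PER_KM = 0.2   # typical single-mode fiber

for km in (10, 50, 80):       # arbitrary example span lengths
    print(f"{km} km: hollow core {HOLLOW_CORE_DB_PER_KM * km:.0f} dB, "
          f"traditional {TRADITIONAL_DB_PER_KM * km:.0f} dB")
# Every extra 10 dB is another factor-of-ten drop in optical power, which is why
# closing the manufacturing gap matters for long spans.
```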

The Lumenisity breakthrough comes from the ability to combine multiple wavelengths of light while avoiding the phenomenon known as interwave mixing, where different light frequencies interfere with each other. By minimizing signal dispersion, Lumenisity has eliminated the need for the digital signal processors that are used in other fiber to compensate for chromatic dispersion. This means repeater sites can be placed farther apart and require simpler, cheaper electronics.

Lumenisity doesn’t see hollow core fiber being used as a replacement on most fiber routes. The real benefits come in situations that require low latency along with high bandwidth. For example, hollow core fiber might be used to feed the trading desks on Wall Street. It might also improve performance for fiber leaving big data centers.

Lumenisity is building a factory in the U.K. to manufacture hollow core fiber and expects to have it in mass production by 2023.

The Natural Evolution of Technology

I’ve been thinking lately about the future of current broadband technologies. What might the broadband world look like in twenty years?

The future of broadband technology will be driven by the continued growth in broadband demand, both in the amount of bandwidth we use and in the broadband speeds the public will demand. Technologies that can’t evolve to keep up with future demand will fade away – some slowly and some virtually overnight.

I don’t think it’s a big stretch to say that within twenty years, fiber will be king. There is a huge national push to build fiber now, with funding from federal and state grants as well as unprecedented amounts of commercial investment. Fiber will be built in a lot of rural America through subsidies and in a lot of small and medium towns because it makes financial sense. The big challenge will continue to be urban neighborhoods where fiber construction costs are high. Twenty years from now, we’ll look back on today as the time when we finally embraced fiber, much like we now look back at the time twenty years ago when DSL and cable modems quickly killed dial-up.

It goes without saying that telephone copper will be dead in twenty years. To the extent copper is still on poles, it will be there to support overlashed fiber. DSL will serve as the textbook posterchild for how technologies come and go. DSL is already considered obsolete, a mere twenty years after its introduction to the market. In twenty more years, it will be a distant memory.

I don’t see a big future for rural WISPs. These companies will not fare well in the fierce upcoming competition with fiber, low-orbit satellites, and even fixed cellular. Some stubborn WISPs will hang on with small market penetrations, but research into new and better radios will cease as demand for WISP services fades. The smart WISPs are going to move into towns and cities. WISPs willing to adapt to using millimeter-wave radios can grab a decent market share in towns by offering low prices to consumers who value price over big bandwidth. I predict that WISPs will replace DSL as the low-price competitor against the large ISPs in towns and cities.

Low-orbit satellites will still serve the most remote customers in twenty years – but they won’t be the technology of choice, due to what will by then be considered very slow bandwidth. Two decades from now, a 150 Mbps download connection is going to feel like today’s DSL. The satellite companies will thrive in the third world, where they will be the ISP of choice for most rural customers. Interestingly, when I look out forty years, I think it’s likely that residential satellite broadband will fade into history. It’s hard to envision this technology having a forty-year shelf life in a world where broadband demand continues to grow.

The technology that is hard to predict is cable broadband. From a technology perspective, it’s hard to see cable companies still wanting to maintain coaxial copper networks. In twenty years, these networks will be 70 years old. We don’t talk about it much, but age affects coaxial networks even more than telephone copper networks. Over the next decade, cable companies face a hard choice – convert to fiber or take one more swing at upgrading to DOCSIS 4.0 and its successors. It’s hard to imagine the giant cable companies like Comcast or Charter making the decision to go all fiber – they will worry too much about how the huge capital outlay will hurt their stock prices.

I expect there will still be plenty of coaxial networks around in twenty years. Unfortunately, I foresee that coaxial copper will stay in the poorest urban neighborhoods and smaller rural towns while suburbs and more affluent urban neighborhoods see a conversion to fiber. For anybody who doesn’t think that can happen, I point to AT&T’s history of DSL redlining. Cable companies might even decide to largely abandon poorer neighborhoods to WISPs and municipal fiber overbuilders, similar to the way that AT&T recently walked away from DSL.

It’s easy to think of technologies as being permanent and that any broadband technology used today will be around for a long time. One only has to look at the history of DSL to see that broadband technologies can reach great success only to be obsolete within just a few decades. We’re going to see the evolution of technology for as long as the demand for broadband continues to grow. Much of the technology being touted today as broadband solutions will quietly fade into obscurity over the next twenty years.

This is the biggest reason why I think that only technologies that will still be relevant a decade or two from now should be eligible for federal grant funding. It’s shortsighted to give tax dollars to technologies that are not likely to be relevant in the somewhat near future. We saw a great example of that with the CAF II program that funded already-obsolete DSL. More recently, we saw federal grant money going to Viasat and to rural WISPs in the CAF II reverse auction. There are smarter ways to spend valuable tax dollars.