Voice over New Radio

I cut my teeth in this industry working with and for telephone companies. But telephone service is now considered by most in the industry to be a commodity barely worth any consideration – it’s just something that’s easy and works. Except when it doesn’t. Cellular carriers have run into problems maintaining voice calls when customers roam between the new 5G frequency bands and the older 4G frequencies.

Each of the cellular carriers has launched new frequency bands in the last few years and has labeled them as 5G. The new frequency bands are not really 5G yet because the carriers haven’t implemented any of the new 5G features. But the carriers have deployed the new frequencies to be ready for full 5G when it finally arrives. The new frequencies are operated as separate networks and are not fully integrated into the traditional cellular network – in effect, cellular companies are now operating two side-by-side networks. They will eventually launch true 5G on the new frequencies and over time will integrate the 4G networks with the new 5G networks. It’s a smart migration plan.

The cellular carriers are seeing dropped voice calls when a customer roams and a voice connection is handed off between the two networks. Traditionally, roaming happened when a customer moved from one cellular site to a neighboring one. Roaming has gotten more complicated because customers can now be handed between networks while still using the same cell site. The coverage areas of the old and new frequencies are not the same, and customers roam when moving out of range of a given frequency or when hitting a dead spot. The most acute mismatch is between the broad 4G footprint and the small areas covered by millimeter-wave spectrum in some center cities.

It turns out that a lot of telephone calls are dropped during the transition between the two networks. There has always been some small percentage of calls that get dropped while roaming, and we’ve each experienced times when we unexpectedly lost a voice call – but the issue is far more pronounced when roaming between the 5G and 4G networks.

The solution that has been created to fix the voice problems is labeled Voice over New Radio (VoNR). The technology brings an old concept to the 5G networks. ISPs like cable companies and WISPs process IP voice calls through an IP Multimedia Core Network Subsystem (IMS). The IMS core uses standard protocols like SIP (Session Initiation Protocol) to standardize the handoff of IP calls so that calls can be exchanged between disparate kinds of networks.
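
As a rough illustration of what that signaling looks like, here is a minimal sketch of the SIP INVITE request that kicks off an IP voice call. The addresses, tags, and IDs are hypothetical placeholders, not values from any particular carrier's IMS.

```python
# A minimal, illustrative SIP INVITE of the kind an IMS core routes to set up
# an IP voice call. All addresses, tags, and IDs below are hypothetical.
invite = "\r\n".join([
    "INVITE sip:bob@example.net SIP/2.0",
    "Via: SIP/2.0/UDP alice-phone.example.com:5060;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "From: Alice <sip:alice@example.com>;tag=1928301774",
    "To: Bob <sip:bob@example.net>",
    "Call-ID: a84b4c76e66710@alice-phone.example.com",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@alice-phone.example.com>",
    "Content-Type: application/sdp",  # a real INVITE carries an SDP body here
    "Content-Length: 0",              # describing the voice codecs being offered
    "",
])
print(invite)
```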

VoNR packetizes the media layer along with the voice signal. This means that a call that is transferred to 4G can quickly establish a connection with Voice over LTE (VoLTE) before the call gets dropped. This sounds like a simple concept, but on a pure IP network, it’s not easy to distinguish voice packets from other data packets. That alone causes some of the problems on 5G because a voice call doesn’t get priority over other data packets. If a 5G signal weakens for any reason, a voice call suffers and can drop like any other broadband function. We barely notice a hiccup when web browsing or watching a video, but even a quick temporary hiccup can end a voice call.
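
One common way IP networks separate voice from ordinary data is by marking voice packets with a Differentiated Services (DSCP) value such as Expedited Forwarding, so routers can queue them ahead of best-effort traffic. The sketch below shows the general idea with a plain UDP socket; the destination addresses are hypothetical, and real VoNR/VoLTE prioritization happens inside the carrier network through dedicated QoS mechanisms rather than in application code like this.

```python
import socket

# DSCP "Expedited Forwarding" (decimal 46). The IP TOS byte carries the DSCP
# value in its upper six bits, so the byte to set is 46 << 2 = 184 (0xB8).
EF_TOS = 46 << 2

voice_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
voice_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # left at best-effort (TOS 0)

# Hypothetical destinations - routers along the path that honor DSCP will
# queue the marked voice packets ahead of the unmarked data packets.
voice_sock.sendto(b"rtp-voice-frame", ("192.0.2.10", 5004))
data_sock.sendto(b"bulk-data-chunk", ("192.0.2.20", 9000))
```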

The new technology brings a promise of some interesting new functions to 5G. For example, it should be possible in the future to prioritize calls made to 911 so that they can’t be dropped. The new technology also will allow for improved voice quality and new features. For example, with 5G, there is enough bandwidth to create a conference call between multiple parties without losing call quality. This should also allow for establishing guaranteed voice and music connections while gaming or doing other data-intensive functions.

As an old telco guy, it’s a little nostalgic to see engineers working to improve the quality of voice. Over the last decades, we’ve learned to tolerate low-quality cellular voice connections, and we’ve mostly forgotten how good the connections used to be on our old black Bell rotary-dial phones. This isn’t one of the touted benefits of 5G, but perhaps Voice over New Radio can bring that back again.

Meet the Metaverse

I had already written this blog before Facebook announced it would be hiring at least 10,000 programmers to start moving the company towards the metaverse. I see the metaverse as one of the next big drivers of increased bandwidth usage. Wikipedia defines the metaverse as a collective virtual shared space created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual reality worlds, augmented reality, and the Internet. In the most basic sense, the metaverse consists of online worlds where people interact through avatars.

The early metaverse already includes platforms like Roblox, Fortnite, Decentraland, Upland, and Sandbox. These platforms have attracted millions of users who play games, interact with friends, and buy, sell, and barter goods and services, all largely out of reach of the mainstream Internet.

Big companies see the metaverse as a huge source of future revenue. The Facebook announcement is the biggest but not the first announcement of a corporation making a big metaverse play. Facebook joins Sony, Microsoft, Alphabet, Nvidia, and other corporations that are also betting big on the metaverse. It will be interesting to see if generation-Z will reject big corporate metaverse platforms in favor of ones they create themselves.

The metaverse is in its infancy, but it is already much more than just a gaming platform. People buy, trade, and sell real-life assets on the platforms using real-world barter or cryptocurrency, and there are already successful metaverse merchants and traders. It’s been reported that a few people have made over $1 million buying and selling digital assets in Upland. One of the popular ways to trade for online assets is through NFTs (non-fungible tokens), which serve as proof of ownership of digital goods. People are already spending a lot of money today to create their own digital world.

The big corporations are also banking on a lot more than gaming. They see an online world that could provide the next platform for business meetings and virtual connections. It’s a platform that can give everybody a front-row seat at a big concert. The companies investing in the space see it as an opportunity to develop new product lines to manage online payments, to authenticate users, to create content, to create security, and yes, to advertise. Nike recently released new Air Jordans strictly online in Fortnite. Gucci sold a virtual bag for more than the cost of the real thing in Roblox.

Many in generation-Z are already routinely immersed in metaverse platforms. I have a friend whose twenty-year-old spends a lot of time on today’s metaverse platforms because it’s a place where friends can interact outside of the tracking that is routinely done in the ad-driven Internet – it’s an online presence that is out of sight.

How will the metaverse impact broadband? Already today, a connection to a platform like Fortnite is two-way. With today’s rudimentary graphics, the amount of bandwidth being used isn’t large and requires less bandwidth than graphic-intense cloud games. But the huge resources being poured into metaverse platforms mean a big step-up in graphics, and thus in bandwidth. It’s already not unusual for a teen to have several metaverse platforms running at the same time – picture what that means for bandwidth in five or so years if this is being done with 4K video.
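
To put rough numbers on that, here is a back-of-the-envelope sketch; the per-stream bitrates are commonly cited streaming ranges, not measurements from any specific platform.

```python
# Rough household bandwidth if several metaverse sessions run at once,
# assuming each session streams at commonly cited video rates (Mbps).
current_graphics_mbps = 5    # roughly today's rudimentary metaverse graphics
four_k_stream_mbps = 25      # upper end of typical 4K streaming

for sessions in (1, 2, 3):
    today = sessions * current_graphics_mbps
    future = sessions * four_k_stream_mbps
    print(f"{sessions} simultaneous sessions: ~{today} Mbps today vs ~{future} Mbps at 4K")
```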

I’ve said for twenty years that I want a holodeck. It’s likely that the closest I’ll get to that will be online through one of the future metaverse platforms. I’ll begrudgingly accept that if the experience can feel real enough for me to suspend my disbelief.

Aging Coaxial Copper Networks

We’re not talking enough about the aging coaxial copper networks that provide broadband to the majority of customers in the country. Comcast and Charter alone serve more than half of all broadband subscribers.

These copper networks are getting old. Most coaxial networks were constructed in the 1970s and had an expected life of perhaps forty years. We seem to be quietly ignoring that these networks will soon be fifty years old.

There is an industry-wide consensus that telephone copper is past the end of its economic life, and most telephone networks that are still working are barely limping along. It was no surprise last October when AT&T announced that it would no longer connect new DSL customers – if the company had its way, it would completely walk away from all copper, other than as a convenient place to overlash fiber.

To some degree, coaxial networks are more susceptible to aging than telephone copper networks. The copper wire inside a coax cable is much thicker than telephone copper wires, and that is what keeps the networks chugging along. However, coaxial networks are highly susceptible to outside interference. A coaxial network uses a technology that creates a captive RF radio network inside the wires. This technology uses the full range of radio spectrum between 5 MHz and 1,000 MHz inside the wires, with the signals arranged in channels, just as is done in wireless networks. A coaxial copper network is susceptible to outside interference anywhere along the wide range of frequencies being carried.
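
For a sense of scale, a traditional North American coax channel plan slices that spectrum into 6 MHz channels. A quick sketch of the arithmetic:

```python
# Rough channel count in a traditional North American coax plant, assuming
# 6 MHz channels and ignoring how the band is split between upstream and
# downstream use.
low_mhz, high_mhz = 5, 1000
channel_width_mhz = 6
usable_mhz = high_mhz - low_mhz
channels = usable_mhz // channel_width_mhz
print(f"~{channels} channels of {channel_width_mhz} MHz fit in {usable_mhz} MHz of coax spectrum")
# Interference leaking through a damaged sheath can corrupt any of them.
```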

Decades of sun, cold, water, and ice accumulate to create slow deterioration of the coaxial copper and the sheath around the wire. It’s vital for a coaxial sheath to remain intact since it acts as the barrier to interference. As the sheath gets older, it develops breaks and cracks and is not as effective in shielding the network. The sheathing also accumulates breaks from decades of repairs after storms and other damage. Over forty or fifty years, the small dings and dents to the network add up. The long coaxial copper wires hanging on poles act as a giant antenna, and any break in the cable sheathing is a source for interference to enter the network.

Just like telcos never talk publicly about underperforming DSL, you never hear a cable company admit to poor performance in its networks. But I’ve found that the performance of coaxial networks varies within almost every community of any size. I’ve worked in several cities in recent years where we gathered speed tests by address, and there are invariably a few neighborhoods that have broadband speeds far slower than the rest of the network. The primary explanation for poorly performing neighborhoods is likely deterioration of the physical coaxial wires.

Cable companies could revitalize neighborhoods by replacing the coaxial cable – but they rarely do so. Anybody who has tried to buy house wiring knows that copper wiring has gotten extremely expensive. I haven’t done the math recently, but I wouldn’t be surprised if it costs more to hang new coaxial copper than fiber, given how sharply copper prices have risen in recent years.

Big cable companies deliver decent bandwidth to a lot of people, but there are subsets of customers in most markets who have lousy service due to local network issues. I talk to cities all the time that are tired of fielding complaints from the parts of town where networks underperform. City governments want to know when the cable companies will finally bite the bullet and upgrade to fiber. A lot of industry analysts seem to think the cable companies will put off upgrades for as long as possible, and that can’t be comforting to folks living in pockets of cable networks that already have degraded service. And as these networks continue to age, the problems will only get worse every year.

Zayo Installs 800 Gbps Fiber

Zayo announced the installation of an 800 Gbps fiber route between New York and New Jersey. This is a big deal for a number of reasons. In my blog, I regularly talk about how home and business bandwidth has continued to grow and is doubling roughly every three years. It’s easy to forget that the traffic on the Internet backbone is experiencing the same growth. The routes between major cities like Washington DC and New York City are carrying 8-10 times more traffic than a decade ago.
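
Those two growth figures are consistent with each other, as a quick sketch shows:

```python
# If backbone traffic doubles roughly every three years, growth over a
# decade works out to roughly the 8-10x figure cited for major routes.
years = 10
doubling_period = 3
growth = 2 ** (years / doubling_period)
print(f"Doubling every {doubling_period} years -> ~{growth:.1f}x in {years} years")
# Doubling every 3 years -> ~10.1x in 10 years
```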

Ten years ago, we were already facing a backhaul crisis on some of the busiest fiber routes in the country. The fact that some routes continue to function is a testament to smart network engineers and technology upgrades like the one announced by Zayo.

There is not a lot of new fiber construction along major routes in places like the northeast since such construction is expensive. Over the last few years, a major new fiber route was installed along the Pennsylvania Turnpike as that road was rebuilt – but such major fiber construction efforts are somewhat rare. That means that we must somehow handle the growth of intercity traffic with existing fiber routes that are already fully subscribed.

You might think that we could increase fiber capacity along major fiber routes by upgrading the bandwidth capacity, as Zayo is doing on this one route. But that is not a realistic option in most cases. Backhaul fiber routes can best be described as a hodge-podge. Let’s suppose as an example that Verizon owns a fiber route between New York City and Washington DC. The company would use some of the fibers on that route for its own cellular and FiOS traffic. But over the years, Verizon will have leased lit or dark fibers to other carriers. It wouldn’t be surprising on a major intercity route to find dozens of such leased arrangements. Each one of those long-term arrangements comes with different contractual requirements. Lit routes might be at specific bandwidths. Verizon would have no way of knowing what those leasing dark fiber are carrying.

Trying to somehow upgrade a major fiber route is a huge puzzle, largely confounded by the existing contractual arrangements. Many of the customers using lit fiber will have a five 9’s guarantee of uptime (99.999%), so it’s incredibly challenging to take such a customer out of service, even for a short time, as part of migrating to a different fiber or a different set of electronics.
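
To see why a five 9’s commitment makes migrations so painful, it helps to translate 99.999% into actual allowed downtime:

```python
# Allowed downtime under a 99.999% ("five nines") uptime guarantee.
minutes_per_year = 365 * 24 * 60
allowed_fraction = 1 - 0.99999
print(f"Five nines allows ~{minutes_per_year * allowed_fraction:.1f} minutes of downtime per year")
print(f"That is ~{minutes_per_year * allowed_fraction / 12:.1f} minutes per month")
# Roughly 5.3 minutes per year, or about 26 seconds per month.
```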

Some of the carriers on the major transport routes sell transport to smaller entities. This would be carriers like Zayo, Level 3, and XO (which is owned by Verizon). These wholesale carriers are where smaller carriers go to find transport on these existing busy routes. That’s why it’s a big deal when Zayo and similar carriers increase capacity.

I wrote about the first 400 Gbps fiber path in March 2020, implemented by AT&T between Dallas and Atlanta. Numerous carriers have started the upgrade to 400 Gbps transport, including Zayo, which has plans to have that capacity on 21 major routes by the end of 2022. The 800 Gbps route is unique in that Zayo is able to combine two 400-Gbps fiber signals into one fiber path using electronics from Ciena. Verizon had a trial of 800 Gbps last year using equipment from Infinera.

In most cases, the upgrades to 400 Gbps or 800 Gbps will replace routes lit at the older standard 100 Gbps transport. While that sounds like a big increase in capacity, in a world where network capacity is doubling every three years, these upgrades are not a whole lot more than band-aids.

At some point, we’re going to need a major upgrade to intercity transport routes. Interestingly, all of the federal grant funding floating around is aimed at rural last-mile fiber – an obviously important need. Many federal funding sources can’t be used to build or upgrade middle-mile. But at some point, somebody is going to have to make the needed investments. It does no good to upgrade last-mile capacity if the routes between towns and the Internet can’t handle the broadband demand. This is probably not a role for the federal government because the big carriers make a lot of money on long-haul transport. At some point, the biggest carriers need to get into a room and agree to open up the purses – for the benefit of them all.

Improvements in Undersea Fiber

We often forget that a lot of things we do on the web rely on broadband traffic that passes through undersea cables. Any web traffic from overseas gets to the US through one of the many underwater fiber routes. Like with all fiber technologies, the engineers and vendors have regularly been making improvements.

The technology involved in undersea cables is quite different than what is used for terrestrial fibers. A long fiber route includes repeater sites where the light signal is refreshed. Without repeaters, the average fiber light signal will die within about sixty miles. Our landline networks rely on powered repeater sites. For major cross-country fiber routes, multiple carriers often share the repeater sites.

But an undersea cable has to include the electric power and the repeaters along with the fiber since the cable may be laid as deep as 8,000 meters beneath the surface. HMN Tech recently announced a big improvement in undersea electronics technology. On a new undersea route between Hainan, China and Hong Kong, the company has been able to deploy 16 fibers with repeaters. This is a huge improvement over past technologies that have limited the number of fibers to eight or twelve. With 16 lit fibers, HMN will be able to pass data on this new route at 300 terabits per second.
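
Dividing the announced route capacity by the fiber count gives a rough sense of per-fiber throughput, assuming the total is spread evenly across the 16 fibers (my assumption, since the announcement only cites the aggregate figure):

```python
# Rough per-fiber throughput on the announced route, assuming the 300 Tbps
# total is spread evenly across the 16 lit fibers.
total_tbps = 300
fibers = 16
print(f"~{total_tbps / fibers:.1f} Tbps per fiber")   # ~18.8 Tbps
```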

Undersea fibers have a rough existence. Somewhere in the world, there is a cut on an undersea fiber every three days. There is a fleet of ships that travel the world fixing undersea fiber cuts or bends. Most undersea fiber problems come from the fiber rubbing against rocks on the seabed. But fibers are sometimes cut by ship anchors, and even occasionally by sharks that seem to like to chew on the fiber – sounds just like squirrels.

Undersea fibers aren’t large. Near the shore, the cables are about the width of a soda can, with most of that bulk made up of tough shielding to protect against the dangers that come from shallow water. To the extent possible, an undersea fiber will be buried near shore. Farther out to sea, the cables are much smaller, about the size of a pencil – there is no need to try to protect fibers that are deep on the ocean floor.

With the explosion in worldwide data usage, it’s vital that the cables can carry as much data as possible. The builders of the undersea routes only count on a given fiber lasting about ten years. The fiber will last longer, but the embedded electronics are usually too slow after a decade to justify continued use of the cable. Upgrading to faster technologies could mean a longer life for the undersea routes, which would be a huge economic benefit.

The Beginnings of 8K Video

In 2014 I wrote a blog asking if 4K video was going to become mainstream. At that time, 4K TVs were just hitting the market and cost $3,000 and higher. There was virtually no 4K video content on the web other than a few experimental videos on YouTube. But in seven short years, 4K has become a standard technology. Netflix and Amazon Prime have been shooting all original content in 4K for several years, and the rest of the industry has followed. Anybody who purchased a TV since 2016 almost surely has 4K capabilities, and a quick scan of shopping sites shows 4K TVs as cheap as $300 today.

It’s now time to ask the same question about 8K video. TCL is now selling a basic 8K TV at Best Buy for $2,100. But like with any cutting-edge technology, LG is offering a top-of-the-line 8K TV on Amazon for $30,000. There are a handful of video cameras capable of capturing 8K video. Earlier this year, YouTube provided the ability to upload 8K videos, and a few are now available.

So what is 8K? The 8K designation refers to the number of pixels on a screen. High-definition TV, or 2K, allows for 1920 x 1080 pixels. 4K grew this to 3840 x 2160 pixels, and the 8K standard increases pixels to 7680 x 4320. An 8K video stream has four times the pixels of a 4K stream and sixteen times the pixels of a high-definition one.
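
The pixel arithmetic is easy to check with a quick sketch:

```python
# Pixel counts for the three video formats discussed above.
formats = {"HD (2K)": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}
hd_pixels = 1920 * 1080
for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / hd_pixels:.0f}x HD)")
# HD (2K): 2,073,600 pixels (1x HD)
# 4K: 8,294,400 pixels (4x HD)
# 8K: 33,177,600 pixels (16x HD)
```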

8K video won’t only bring higher clarity, but also a much wider range of colors. Video today is captured and transmitted using a narrow range of red, green, blue, and sometimes white pixels that vary inside the limits of the Rec. 709 color specification. The colors our eyes perceive on the screen are basically combinations of these few colors, along with standards that vary the brightness of each pixel. 8K video will widen both the color palette and the brightness scale to provide a wider range of color nuance.

The reason I’m writing about 8K video is that any transmission of 8K video over the web will be a challenge for almost all current networks. Full HD video requires a video stream between 3 Mbps and 5 Mbps, with the highest bandwidth needs coming from a high-action video where the pixels on the stream are all changing constantly. 4K video requires a video stream between 15 Mbps and 25 Mbps. Theoretically, 8K video will require streams between 200 Mbps and 300 Mbps.

We know that video content providers on the web will find ways to reduce the size of the data stream, meaning they likely won’t transmit pure 8K video. This is done today for all videos, and there are industry tricks used, such as not transmitting background pixels in a scene where the background doesn’t change. But raw 4K or 8K video that is not filtered to be smaller will need the kind of bandwidth listed above.

There are no ISPs, even fiber providers, who would be ready for the large-scale adoption of 8K video on the web. It wouldn’t take many simultaneous 8K subscribers in a neighborhood to exhaust the capability of a 2.4 Gbps node in a GPON network. We’ve already seen faster video be the death knell of other technologies – people were largely satisfied with DSL until they wanted to use it to view HD video, and at that point, neighborhood DSL nodes got overwhelmed.
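
A rough sketch of why the node math gets uncomfortable, using the per-stream bandwidth estimates above:

```python
# How many simultaneous streams would fill a 2.4 Gbps GPON node,
# using the per-stream estimates cited above.
node_capacity_mbps = 2400
for label, stream_mbps in [("HD", 5), ("4K", 25), ("8K (theoretical)", 250)]:
    print(f"{label} at {stream_mbps} Mbps: "
          f"~{node_capacity_mbps // stream_mbps} simultaneous streams per node")
# HD: 480 streams, 4K: 96 streams, 8K: fewer than 10 streams
```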

There were a lot of people in 2014 who said that 4K video was a fad that would never catch on. With 4K TVs at the time priced over $3,000 and a web that was not ready for 4K video streams, this seemed like a reasonable guess. But as 4K TV sets got cheaper and as Netflix and Amazon publicized 4K video capabilities, the 4K format has become commonplace. It took about five years for 4K to go from YouTube rarity to mainstream. I’m not predicting that 8K will do the same thing – but it’s possible.

For years, I’ve been advising clients to build networks that are ready for the future. We’re facing a possible explosion of broadband demand over the next decade from applications like 8K video and telepresence – both requiring big bandwidth. If you build a network today without contemplating these future needs, you are looking at being obsolete in a decade – likely before you’ve even paid off the debt on the network.

Demystifying Oversubscription

I think the concept that I have to explain the most as a consultant is oversubscription, which is the way that ISPs share bandwidth between customers in a network.

Most broadband technologies distribute bandwidth to customers in nodes. ISPs using passive optical networks, cable DOCSIS systems, fixed wireless technology, and DSL all distribute bandwidth to a neighborhood device of some sort that then distributes the bandwidth to all of the customers in that neighborhood node.

The easiest technology to demonstrate this with is passive optical fiber since most ISPs deliver nodes of only 32 homes or fewer. GPON technology delivers 2.4 gigabits of download bandwidth to the neighborhood node to share among those 32 households.

Let’s suppose that every customer has subscribed to a 100 Mbps broadband service. Collectively, for the 32 households, that totals 3.2 gigabits of potential demand – more than the 2.4 gigabits being supplied to the node. When people first hear about oversubscription, they think that ISPs are somehow cheating customers – how can an ISP sell more bandwidth than is available?

The answer is that the ISP knows that it’s a statistical certainty that all 32 customers won’t use the full 100 Mbps download capacity at the same time. In fact, it’s rare for a household to ever use the full 100 Mbps capability – that’s not how the Internet works. Let’s say a given customer is downloading a huge file. Even if the ISP at the other end of that transaction has fast Internet, the signal doesn’t come pouring in from the Internet at a steady speed. Packets have to find a path between the sender and the receiver, and the packets come in unevenly, in fits and starts.

But that doesn’t fully explain why oversubscription works. It works because all of the customers in a node never use a lot of bandwidth at the same time. On a given evening, some of the people in the node aren’t at home. Some are browsing the web, which requires minimal download bandwidth. Many are streaming video, which requires a lot less than 100 Mbps. A few are using the bandwidth heavily, like a household with several gamers. But collectively, it’s nearly impossible for this particular node to use the full 2.4 gigabits of bandwidth.

Let’s instead suppose that everybody in this 32-home node has purchased a gigabit product, like the one delivered by Google Fiber. Now, the collective potential bandwidth demand is 32 gigabits, far greater than the 2.4 gigabits being delivered to the neighborhood node. This is starting to feel more like hocus pocus, because the ISP has sold thirteen times the capacity that is available to the node. Has the ISP done something shady here?

The chances are extremely high that it has not. The reality is that the typical gigabit subscriber doesn’t use a lot more bandwidth than a typical 100 Mbps customer. And when the gigabit subscriber does download something, the download finishes more quickly, meaning that the transaction has less of a chance of interfering with transactions from neighbors. Google Fiber knows it can safely oversubscribe at thirteen to one because it knows from experience that there is rarely enough usage in the node to exceed the 2.4 gigabit download feed.
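
Both oversubscription ratios are easy to verify with a quick sketch:

```python
# Oversubscription ratio: bandwidth sold vs. bandwidth delivered to the node,
# for the two 32-home scenarios described above.
node_capacity_gbps = 2.4
homes = 32
for plan_mbps in (100, 1000):
    sold_gbps = homes * plan_mbps / 1000
    ratio = sold_gbps / node_capacity_gbps
    print(f"{homes} homes at {plan_mbps} Mbps: {sold_gbps:.1f} Gbps sold, "
          f"oversubscribed {ratio:.1f}:1")
# 32 homes at 100 Mbps: 3.2 Gbps sold, oversubscribed 1.3:1
# 32 homes at 1000 Mbps: 32.0 Gbps sold, oversubscribed 13.3:1
```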

But it can happen. If this node is full of gamers, and perhaps a few super-heavy users like doctors who view big medical files at home, this node could have problems at this level of oversubscription. ISPs have easy solutions for this rare event. The ISP can move some of the heavy users to a different node. Or the ISP can even split the node into two, with 16 homes on each node. This is why customers with a quality-conscious ISP rarely see any glitches in broadband speeds.

Unfortunately, this is not true with the other technologies. DSL nodes are overwhelmed almost by definition. Cable and fixed wireless networks have always been notorious for slowing down at peak usage times when all of the customers are using the network. Where a fiber ISP won’t put more than 32 customers on a node, it’s not unusual for a cable company to have a hundred customers on one.

Where the real oversubscription problems are seen today is on the upload link, where routine household demand can overwhelm the size of the upload link. Most households using DSL, cable, and fixed wireless technology during the pandemic have stories of times when they got booted from Zoom calls or couldn’t connect to a school server. These problems are fully due to the ISP badly oversubscribing the upload link.

The DOCSIS vs. Fiber Debate

In a recent article in FierceTelecom, Curtis Knittle, the VP of Wired Technologies at CableLabs, argues that the DOCSIS standard is far from the end of its road and that cable company coaxial cable will be able to compete with fiber for many years to come. It’s an interesting argument, and from a technical perspective, I’m sure Mr. Knittle is right. The big question will be whether the big cable companies decide to take the DOCSIS path or bite the bullet and start the conversion to fiber.

CableLabs released the DOCSIS 4.0 standard in March 2020, and the technology is now being field tested in planned deployments through 2022. In the first lab deployment of the technology earlier this year, Comcast achieved a symmetrical 4 Gbps speed. Mr. Knittle claims that DOCSIS 4.0 can outperform the XGS-PON we’re now seeing deployed. He claims that DOCSIS 4.0 will be able to produce a true 10-gigabit output while the actual XGS-PON output is closer to 8.7 Gbps downstream.

There are several issues that are going to drive the decision-making in cable company board rooms. The first is cost. An upgrade to DOCSIS 4.0 doesn’t sound cheap. The upgrade increases system bandwidth by working in higher frequencies – similar to G.fast on telephone copper. A full upgrade to DOCSIS 4.0 will require ripping and replacing most network electronics. Coaxial copper networks are getting old, and this probably also means replacing a lot of older coaxial cables in the network. It probably means replacing power taps and amplifiers throughout the outside network.

Building fiber is also expensive. However, the cable companies have surely learned the lesson from telcos like AT&T and Verizon that there are huge cost savings from overlashing fiber onto existing wires. A cable company can install fiber for a lot less than any competitor by overlashing onto its existing coax.

There is also an issue of public perception. I think the public believes that fiber is the best broadband technology. Cable companies already see that they lose the competitive battle in any market where fiber is built. The big telcos all have aggressive plans to build fiber-to-the-premise, and there is a lot of fiber coming in the next five years. Other technologies like Starry wireless are also going to nibble away at the urban customer base. All of the alternative technologies to cable have faster upload speeds than the current DOCSIS technology. The cable industry has completely avoided talking about upload speeds because it knows how cable subscribers struggled working and schooling from home during the pandemic. How many years can the cable companies stave off competitors that offer a better experience?

There is finally the issue of speed to market. The first realistic date to start implementing DOCSIS 4.0 on a large scale is at least five years from now. That’s five long years to limp forward with underperforming upload speeds. Customers that become disappointed with an ISP are the ones that leap first when there is any alternative. Five years is a long time to cede the marketing advantage to fiber.

The big cable companies have a huge market advantage in urban markets – but they are not invulnerable. Comcast and Charter have both kept Wall Street happy with continuous growth from capturing disaffected DSL customers. Wall Street is going to have a totally different view of the companies if that growth stops. The wheels will likely come off the stock prices if the two companies ever start losing customers.

I’ve always thought that the cable companies’ success over the last decade has been due more to having a lousy competitor in DSL than to any great performance by the cable companies themselves. Every national customer satisfaction poll continues to rank cable companies at the bottom, behind even the IRS and funeral homes.

We know that fiber builders do well against cable companies. AT&T says that it gets a 30% market share in a relatively short time everywhere it builds fiber. Over time, AT&T thinks it will capture 50% of all subscribers with fiber, which means a 55% to 60% market share. The big decision for the cable companies is whether they are willing to watch their market position start waning while waiting for DOCSIS 4.0. Are they going to bet another decade of success on aging copper networks? We’ve already seen Altice start the conversion to fiber. It’s going to be interesting to watch the other big cable companies wrestle with this decision.

Is Wireless Power a Possibility?

Wireless power transmission (WPT) is any technology that can transmit electrical power between two places without wires. As we move towards a future with small sensors in homes, fields, and factories, this is an area of research that is getting a lot more attention. The alternative to wireless power is to put small batteries in sensors and devices and then periodically replace them.

There are half a dozen techniques that can be used to create electric power remotely. Most involve transmitting some form of electromagnetic radiation that is used to excite a remote receiver that converts the energy into electricity. There have been trials using frequencies of all sorts, including microwaves, infrared light, and radio waves.

The most commonly used form of wireless power transmission today is the wireless pad that can recharge a cellphone or other small device. This technology uses inductive coupling, which involves passing alternating current through an induction coil. Since any changing electrical current creates a magnetic field, the induction coil produces a magnetic field that fluctuates in intensity as the AC current constantly changes, and that fluctuating field induces a current in a matching coil inside the phone. A cellphone pad only works over a short distance because the coils inside the devices are small.
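
For readers who want the physics, here is a minimal sketch of Faraday’s law, which governs inductive charging. The coil turns, frequency, and flux values are made-up illustrative numbers, not specifications of any real charging pad:

```python
import math

# Faraday's law: the EMF induced in the receiving coil is proportional to the
# number of turns and to how fast the magnetic flux through it changes.
# For a sinusoidal flux, peak EMF = N * (2 * pi * f) * flux_peak.
turns = 20                 # hypothetical receiver coil turns
frequency_hz = 150_000     # inductive pads typically operate around 100-200 kHz
flux_peak_wb = 5e-7        # made-up peak flux per turn, in webers

emf_peak = turns * 2 * math.pi * frequency_hz * flux_peak_wb
print(f"Peak induced EMF: ~{emf_peak:.1f} volts")   # ~9.4 volts
# Move the coil farther from the pad and the flux collapses, which is why
# a charging pad only works over a short distance.
```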

There are a few household applications where induction charging works over slightly greater distances, such as automatically charging electric toothbrushes and some hand tools. We’ve been using the technology to recharge implanted medical devices since the 1960s. Induction charging has been implemented on a larger scale. In 1980, scientists in California developed a bus that could be recharged wirelessly. There is currently research in Norway and China to top off the charge in cars and taxi batteries to avoid having to stop to recharge electric vehicles.

There have been successful uses of transmitted radiation to create remote electricity over great distances. Radio and microwaves can be beamed great distances to excite a device called a rectenna, or rectifying antenna, which converts the transmitted radio energy into electricity. This has never been able to produce a lot of power, but scientists are looking at the technology again because this could be a way to charge devices like farm sensors in fields.

The private sector is exploring WPT solutions for everyday life. Wi-Charge is using safe infrared light to charge devices within a room. Energous has developed a radio transmitter that can charge devices within a 15-meter radius. Ossia is developing wireless charging devices for cars that will automatically charge cellphones and other consumer devices. We’re not far away from a time when motion detectors, smoke alarms, CO2 sensors, and other devices can be permanently powered without a need for batteries or hardwiring.

Scientists and manufacturers are also exploring long-distance power transmission. Emrod in New Zealand is exploring bringing power to remote sites through the beaming of radio waves. On an even grander scale, NASA is exploring the possibility of beaming power to earth gathered from giant solar arrays in space.

Remote power was originally envisioned by Nikola Tesla, and perhaps over the next few decades it will become an everyday technology that we take for granted. I’m just looking forward to the day when I’m not wakened in the middle of the night by a smoke detector that wants me to know it’s time to change the battery.

Are We Ready for Big Bandwidth Applications?

There is a recent industry phenomenon that could have major impacts on ISP networks in the relatively near future. There has been an explosion of households that subscribe to gigabit data plans. At the end of 2018, only 1.8% of US homes subscribed to a gigabit plan. This grew to 2.8% by the end of 2019. With the pandemic, millions of homes upgraded to gigabit plans in an attempt to find a service that would support working from home. By the end of the third quarter of 2020, gigabit households grew to 5.6% of all households, a doubling in nine months. By the end of last year, this had mushroomed to 8.5% of all households, and OpenVault reports that as of the end of the first quarter of 2021, 9.8% of all households subscribe to gigabit plans.

I have to think that a lot of these upgrades came from homes that wanted faster upload speeds. Cable company broadband is stingy with upload speeds on the basic 100 Mbps and 200 Mbps plans. Surveys my company has done show a lot of dissatisfaction with urban ISPs, and my guess is that most of that unhappiness is due to sluggish upload performance.

Regardless of how we found ourselves at this place, one out of ten households in the US now buys gigabit broadband. As an aside, that fact alone should end any further discussion of 25/3 Mbps as a definition of broadband.

My ISP clients tell me that the average gigabit household doesn’t use a lot more bandwidth than customers buying 100 Mbps broadband – they just get things faster. If you’ve never worked on a gigabit connection, you might not understand the difference – but with gigabit broadband, websites appear on your screen almost instantaneously. The word I’ve always used to describe gigabit broadband is ‘snappy’. It’s like snapping your fingers and what you want appears instantly.

I think the fact that 10% of households have gigabit speeds opens up new possibilities for content providers. In the early days after Google Fiber got the country talking about gigabit fiber, the talking heads in the industry were all asking when we’d see gigabit applications. There was a lot of speculation about what those applications might do – but we never found out because nobody ever developed them. There was no real market for gigabit applications when only a handful of households were buying gigabit speeds. Even at the end of 2019, it was hard to think about monetizing fast web products when less than 3% of all homes could use them.

My instincts tell me that hitting a 10% market share for gigabit subscribers has created the critical mass of gigabit households that might make it financially worthwhile to offer fast web applications. The most likely first applications are probably telepresence and 3D gaming in your living room space. It’s hard to think that there is no market for this.

I know that ISPs are not ready for households to actually use the speeds they have been peddling to them. There is no ISP network anywhere, including fiber networks, that wouldn’t quickly bog down and die if a bunch of subscribers started streaming at fast speeds between 100 Mbps and a gigabit. ISP networks are designed around the concept of oversubscription – meaning that customers don’t use broadband at the same time. The normal parameters for oversubscription are already changing due to the proliferation of VPN connections made for working and schooling from home – ISPs must accommodate large chunks of bandwidth that are in constant use, and that can’t be shared with other customers. Home VPN connections have paralyzed DSL networks, but it’s something that even fiber network engineers are watching carefully.

I’ve been imagining what will happen to a network if households start streaming at a dedicated symmetrical 100 Mbps instead of connecting to Zoom at 2 Mbps. It wouldn’t take many such customers in any neighborhood to completely tie up network resources.
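
The arithmetic is sobering even on fiber. A quick sketch, assuming the 2.4 Gbps GPON node described earlier:

```python
# How many households can stream a dedicated, symmetrical 100 Mbps before a
# 2.4 Gbps GPON node is completely consumed - versus today's 2 Mbps Zoom calls.
node_capacity_mbps = 2400
print(f"Zoom at 2 Mbps: {node_capacity_mbps // 2} simultaneous sessions per node")
print(f"Dedicated 100 Mbps streams: {node_capacity_mbps // 100} households per node")
# 1,200 Zoom calls fit; only 24 dedicated 100 Mbps streams do.
```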

I will be shocked if there aren’t entrepreneurs already dreaming up gaming and telepresence applications that take advantage of the 10% market share for gigabit broadband. Looking back, new technology phenomena seem to hit almost overnight. It’s not hard to imagine a craze where a million gigabit homes are playing live 3D games in the living room air. When that finally happens, ISPs are going to be taken by surprise, and not in a good way. We’ll see the instant introduction of data caps to stop customers from using broadband. But we’ll also see ISPs beefing up networks – they’ll have no choice.