The 12 GHz Battle

A big piece of what the FCC does is to weigh competing claims to use spectrum. There have been non-stop industry fights over the last decade over who gets to use various bands of spectrum. One of the latest fights, the continuation of a dispute that has been going on since 2018, is over the use of the 12 GHz spectrum.

The big wrestling match is between Starlink, which wants to use the spectrum to communicate with its low-orbit satellites, and the cellular carriers and WISPs that want to use the spectrum for rural broadband. Starlink uses this spectrum to connect its ground-based terminals to satellites. Wireless carriers argue that the spectrum should also be shared to enhance rural broadband networks.

The 12 GHz band is attractive to Starlink because it contains 500 MHz of contiguous spectrum with 100 MHz channels – a big data pipe for reaching between satellites and earth. The spectrum is attractive to wireless ISPs for these same reasons, along with other characteristics. The 12 GHz spectrum will carry twice as far as other spectrum commonly used in point-to-multipoint broadband networks, meaning it can cover four times the area from a given tower. The spectrum is also clear of any federal or military encumbrance – something that restricts other spectrum like CBRS. The spectrum is also being used for cellular purposes internationally, which makes for an easy path to find the radios and receivers to use it.
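For readers who want to see the geometry behind the coverage claim above, here is a minimal sketch showing how doubling the reach quadruples the coverage area, since the area served from a tower grows with the square of the radius. The radius figures are made-up placeholders for illustration, not engineering numbers.

```python
import math

def coverage_area_sq_miles(radius_miles: float) -> float:
    """Area of the circle a single tower can serve."""
    return math.pi * radius_miles ** 2

# Hypothetical reach figures purely for illustration -- not engineering numbers.
baseline_radius_miles = 3.0                          # some other point-to-multipoint band
twelve_ghz_radius_miles = 2 * baseline_radius_miles  # 12 GHz reaches roughly twice as far

print(f"baseline: {coverage_area_sq_miles(baseline_radius_miles):.1f} sq. miles")
print(f"12 GHz:   {coverage_area_sq_miles(twelve_ghz_radius_miles):.1f} sq. miles (4x the area)")
```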

In the current fight, Starlink wants exclusive use of the spectrum, while wireless carriers say that both sides can share the spectrum without much interference. These are always the hardest fights for the FCC to figure out because most of the facts presented by both sides are largely theoretical. The only true way to find out about interference is in real-world situations – something that is hard to simulate any other way.

A few wireless ISPs are already using the 12 GHz spectrum. One is Starry, which has recently joined the 12 GHz Coalition, the group lobbying for terrestrial use of the spectrum. The coalition also includes members like Dish Network, various WISPs, and the consumer group Public Knowledge. Starry is one of the few wireless ISPs currently using millimeter-wave spectrum for broadband. The company added almost 10,000 customers to its wireless networks in the second quarter and is poised to grow a lot faster. If the FCC opens the 12 GHz spectrum to all terrestrial uses, it seems likely that the spectrum would quickly be put to use in many rural areas.

As seems usual these days, both sides in the spectrum fight say that the other side is wrong about everything they are saying to the FCC. This must drive the engineers at the FCC crazy since they have to wade through the claims made by both sides to get to the truth. The 12 GHz Coalition has engineering studies that show that the spectrum could coexist with satellite usage with a 99.85% assurance of no interference. Starlink, of course, says that engineering study is flawed and that there will be significant interference. Starlink wants no terrestrial use of the spectrum.

On the flip side, the terrestrial ISPs say that the spectrum in dispute is only 3% of the spectrum portfolio available to Starlink, and the company has plenty of bandwidth and is being greedy.

I expect that the real story is somewhere in between the stories told by both sides. It’s these arguments that make me appreciate the FCC technical staff. It seems every spectrum fight has two totally different stories defending why each side should be the one to win use of spectrum.

The Proliferation of Microtrenching

There is an interesting new trend in fiber construction. Some relatively large cities are getting fiber networks using microtrenching. Just in the last week, I’ve seen announcements of plans to use microtrenching in cities like Mesa, Arizona, and Saratoga Springs, New York. In the past, the technology was used for new fiber networks in Austin, Texas, San Antonio, Texas, and Charlotte, North Carolina. I’ve seen recent proposals made to numerous cities to use microtrenching to build new fiber networks.

Microtrenching works by cutting a narrow trench an inch or two wide and up to a foot deep for the placement of fiber cables. The trench is then sealed with a special epoxy that is supposed to make the cut as strong as it was before.

Microtrenching got a bad name a few years back when Google Fiber walked away from a botched microtrenched network in Louisville, Kentucky. The microtrenching method used allowed water to seep into the narrow trenches, and the freezing and thawing during the winter caused the plugs and the fibers to heave from the small trenches. The vendors supporting the technology say they have solved the problems that surfaced in the Louisville debacle.

There is no doubt that microtrenching is faster than the more traditional method of boring and placing underground conduit. A recent article cited Ting as saying that a crew can microtrench 3,000 feet of fiber per day compared to 500 feet with traditional boring. Since a big part of the cost of building a network is labor, that can save a lot of money for fiber construction.
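As a rough illustration of how that production gap translates into labor dollars, here is a simple sketch using the cited footage-per-day figures; the project size and daily crew cost are hypothetical placeholders, not quoted industry numbers.

```python
# Production rates cited above: microtrenching vs. traditional boring.
MICROTRENCH_FT_PER_DAY = 3_000
BORING_FT_PER_DAY = 500

# Hypothetical project inputs purely for illustration.
project_feet = 50 * 5_280        # a 50-mile fiber build
crew_cost_per_day = 6_000        # placeholder daily crew cost, not a quoted figure

for method, rate in (("microtrenching", MICROTRENCH_FT_PER_DAY), ("boring", BORING_FT_PER_DAY)):
    crew_days = project_feet / rate
    print(f"{method:>14}: {crew_days:6.0f} crew-days, ~${crew_days * crew_cost_per_day:,.0f} in labor")
```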

I’ve worked with cities that have major concerns about microtrenching. A microtrench cut is generally made in the street just a few inches from the curb. Cities worry since they have to routinely cut the streets in this same area to repair water leaks or to react to gas main leaks. In many cases, such repair cuts are made hurriedly, but even if they aren’t, it’s nearly impossible to dig down a few feet with a backhoe and not cut shallow fiber. This means a fiber outage every time a city or a utility makes such a cut in the street, with the outage likely lasting from a few days to a few weeks.

The bigger concern for cities is the durability of the microtrenched cuts. Even if the technology has improved, will the epoxy plug stay strong and intact for decades to come? Every city engineer gets nervous seeing anybody with plans to make cuts in fairly pristine city streets.

City engineers also get nervous when new infrastructure is placed at a depth they don’t consider ideal. Most cities require that a fiber network be placed three feet or deeper, below other utilities like water and gas. They understand how many cuts are made in streets every year, and they can foresee a lot of problems coming with a fiber network that gets regularly cut. City engineers do not want to be the ones constantly blamed for fiber outages.

There are new techniques that might make microtrenching less worrisome. In Saratoga Springs, New York, SiFi is microtrenching in the greenways – the space between the curb and the sidewalk. The company says it has a new technique to feed fiber under and around tree roots without harming them, thus minimizing damage to trees while avoiding the city streets. This construction method doesn’t sound as fast as microtrenching at full speed down a street, but it seems like a technique that would eliminate most of the worries of the civil engineers – assuming it really doesn’t kill all the trees.

It will probably take some years to find out whether microtrenching was a good solution in any given city. The willingness to take a chance demonstrates how badly cities want fiber everywhere – after all, civil engineers are not known as risk takers. I have to imagine that in many cases the decision to allow microtrenching is being approved by somebody other than the engineers.

Unlicensed Spectrum and BEAD Grants

There is a growing controversy brewing over the NTIA’s decision to declare that fixed wireless technology using only unlicensed spectrum is unreliable and not worthy of funding from the BEAD grants. WISPA, the lobbying arm of the fixed wireless industry, released a press release saying that the NTIA has made a big mistake in excluding WISPs that use only unlicensed spectrum.

I’m not a wireless engineer, so before I wrote this blog, I consulted with several engineers and several technicians who work with rural wireless networks. The one consistent message I got from all of them is that interference can be a serious issue for WISPs deploying only unlicensed spectrum. I’m just speculating, but I have to think that was part of the reason for the NTIA decision – interference can mean that the delivered speeds are not reliably predictable.

A lot of the interference comes from the way that many WISPs operate. The biggest practical problem with unlicensed spectrum is that it is unregulated, meaning there is no agency that can force order in a chaotic wireless situation. I’ve heard numerous horror stories about some of the practices in rural areas where there are multiple WISPs. There are WISPs that grab all of the available channels of spectrum in a market to block out competitors. WISPs complain about competitors that cheat by rigging radios to operate above the legal power limit, which swamps everybody else’s signals. And bad behavior begets bad behavior in a vicious cycle where WISPs try to outmaneuver each other for enough spectrum to operate. The reality is that the WISP market using unlicensed spectrum is a free-for-all – it’s the Wild West. Customers bear the brunt of this, with performance that varies day by day as WISPs rearrange their networks. Unless there is only a single WISP in a market, the performance of networks using unlicensed spectrum is unreliable, almost by definition.

There are other issues that nobody, including WISPA, wants to address. There are many WISPs that provide terrible broadband because they deploy wireless technology in ways that exceed the physics of the wireless signals. Many of these same criticisms apply to cellular carriers as well, particularly with the new cellular FWA broadband. Wireless broadband can be high-quality when done well and can be almost unusable if deployed poorly.

There are a number of reasons for poor fixed wireless speeds. Some WISPs are still deploying lower quality and/or older radios that are not capable of the best speeds – this same complaint has been leveled for years against DSL providers. ISPs often pile too many customers into a radio sector and overload it, which greatly dilutes the quality of the broadband that can reach any one customer. Another common issue is WISPs that deploy networks with inadequate backhaul. They will string together multiple wireless backhaul links to the point where each wireless transmitter is starved for bandwidth. But the biggest issue that I see in real practice is that some WISPs won’t say no to customers even when the connection is poor. They will gladly install customers who live far past the reasonable range of the radios or who have restricted line-of-sight. These practices are okay if customers knowingly accept the degraded broadband – but too often, customers are sold poor broadband at full price with no explanation.

Don’t take this to mean that I am against WISPs. I was served by a WISP for a decade that did a great job. I know high-quality WISPs that don’t engage in shoddy practices and that are great ISPs. But I’ve worked in many rural counties where residents lump WISPs in with rural DSL as something they will only purchase if there is no alternative.

Unfortunately, some of these same criticisms can be leveled against some WISPs that use licensed spectrum. Having licensed spectrum doesn’t overcome issues of oversubscribed transmitters, poor backhaul, or serving customers with poor line-of-sight or out of range of the radios. I’m not a big fan of giving grant funding to WISPs who put profits above signal quality and customer performance – but I’m not sure how a grant office would know this.

I have to think that the real genesis for the NTIA’s decision is the real-life practices of WISPs that do a poor job. It’s something that is rarely talked about – but it’s something that any high-quality WISP will bend your ear about.

By contrast, it’s practically impossible to deploy a poor-quality fiber network – it either works, or it doesn’t. I have no insight into the discussions that went on behind the scenes at the NTIA, but I have to think that a big part of the NTIA’s decision was based upon the many WISPs that are already unreliable. The NTIA decision means unlicensed-spectrum WISPs aren’t eligible for grants – but they are free to compete for broadband customers. WISPs that offer a high-quality product at a good price will still be around for many years to come.

How Fast is Starlink Broadband?

We got a recent analysis of Starlink broadband speeds from Ookla, which gathers huge numbers of speed tests from across the country. The U.S. average download speeds on Starlink have improved over the last year, from an average of 65.72 Mbps in 1Q 2021 to 90.55 Mbps in 1Q 2022. But during that same timeframe, upload speeds got worse, dropping from an average of 16.29 Mbps in 1Q 2021 to 10.70 Mbps in 1Q 2022.

It’s likely that some of this change is intentional since ISPs have a choice for the amount of bandwidth to allocate to download versus upload. It seems likely that overall bandwidth capacity and speeds are increasing due to the continually growing size of the Starlink satellite constellation – now over 2,500. Starlink subscriptions are climbing quickly. The company reported having 145,000 customers at the start of the year and recently announced it is up to 400,000 customers worldwide. This fast growth makes me wonder when Starlink will stop calling the business a beta test.

These speed tests raise a few interesting questions. The first is whether these speeds are good enough to qualify Starlink for the RDOF awards that have now been pending at the FCC for over a year and a half. While these speeds are approaching the 100 Mbps speed promised by Starlink in its RDOF bids, it’s worth noting that the 90 Mbps number is an average. Some customers are seeing speeds of over 150 Mbps while others are seeing only 50 Mbps or even less. I’ve talked to a number of Starlink customers, and what they’ve told me is that Starlink needs a view of the ‘whole sky’ from horizon to horizon to operate optimally, and many homes don’t have the needed view. This doesn’t bode well for Starlink RDOF award areas with heavy woods and hills, like the awards in western North Carolina.

There is a lot of speculation that Starlink is limiting the number of subscribers in a given geographic area in order to not dilute speed and performance. The RDOF awards require any winning ISP to serve everybody, and there is still a big question about the kinds of speeds that can be delivered for a geographic area that has a lot of subscribers.

The BEAD grant rules also open the door for Starlink and other satellite providers to some extent. While satellite technology is not deemed reliable enough to directly be used for grant awards, the NTIA has also opened the door to using alternate technologies like satellite and fixed wireless using unlicensed spectrum in areas where landline technologies are too costly. Each state will have to decide if grants can be awarded for satellite broadband in such cases, and it seems likely that some states will allow this.

The Ookla article also shows the Starlink average speeds around the globe. Some of the average speeds are much faster than U.S. speeds, and this might be because smaller countries cover a smaller and less diverse terrain than the U.S. Here, speeds are likely much higher in the open plains states than for customers located in hills, mountains, and woods. There can’t be a technology difference since the same satellites serve around the globe.

There is an interesting app that shows the location of the Starlink satellites. It’s fascinating to watch how they circle the globe. What is most striking about the world map is how few satellites there are over the U.S. at any given time. The app shows a few closely packed strings of satellites that are recent launches that haven’t yet been deployed to their final orbits.

The skies are going to soon get a lot busier. The original business plan for Starlink was to deploy 11,000 satellites. Jeff Bezos and Project Kuiper have FCC permission to deploy satellites, with launches starting this year. OneWeb, which is now aiming to serve business and government customers, has much of its constellation launched but has yet to begin delivering services. Telesat is still marching slowly forward and has fallen behind due to supply chain issues and funding concerns – but still has plans to have a fleet in place in the next few years. I would imagine that in a few years, we’ll see Ookla reports comparing the different constellations.

The NTIA Preference for Fiber

As might be expected when there is $42.5 billion in grant funds available, we are probably not done with the rules for the BEAD grants. There are several areas where heavy lobbying is occurring to change some of the rules established by the NTIA in the NOFO for the grants.

One of the areas with the most lobbying is coming from WISPs that are complaining that the NTIA has exceeded its statutory authority by declaring a strong preference for fiber. The NTIA went so far as to declare that fixed wireless technology that doesn’t use licensed spectrum is not a reliable source of broadband and isn’t eligible for BEAD grants. The wireless industry says that the NTIA is out of bounds and not sticking to a mandate to be technology neutral.

I decided to go back to the Infrastructure Investment and Jobs Act and compare it with the NOFO to see if that is true. Let’s start with the enabling language in the legislation. The IIJA makes it clear that the NTIA must determine the technologies that are eligible for the BEAD grants. One of the criteria the NTIA is instructed to use is that grant-funded technologies must be deemed to be reliable. The Act defines reliable using factors other than speed and specifically says that the term ‘reliable broadband service’ means broadband service that meets performance criteria for service availability, adaptability to changing end-user requirements, length of serviceable life, or other criteria, other than upload and download speeds.

I interpret ‘adaptability to end-user requirements’ to mean that a grant-eligible technology must have some degree of what the industry has been calling being future-proofed. A grant-funded technology must be able to meet future broadband needs and not just the needs of today.

‘Length of serviceable life’ refers to how long a grant investment might be expected to last. Historically, broadband electronics of all types typically don’t have a useful life of much more than a decade. Electronics that sit outside in the elements have an even shorter expected life, with components like outdoor receivers for wireless not usually lasting more than seven years. The broadband assets with the longest useful lives are fiber, huts, and new wireless towers. If you weigh together the average life of all of the components in a broadband network, the average useful life of a fiber network will be several times higher than the useful life of a wireless network.
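Here is a minimal sketch of the kind of cost-weighted average that comparison implies; the cost shares and component lives below are illustrative assumptions on my part, not figures from the NOFO or from any vendor.

```python
def weighted_life(components):
    """Weighted-average useful life, weighted by each component's share of capital cost."""
    total_cost = sum(cost for cost, _ in components)
    return sum(cost * life for cost, life in components) / total_cost

# (share of capital cost, useful life in years) -- illustrative assumptions only.
fiber_network = [
    (70, 40),   # buried fiber and conduit
    (10, 30),   # huts and cabinets
    (20, 10),   # optical electronics
]
wireless_network = [
    (25, 30),   # new towers
    (35, 7),    # outdoor radios and receivers
    (40, 10),   # core electronics and backhaul gear
]

print(f"fiber network:    ~{weighted_life(fiber_network):.0f} years")
print(f"wireless network: ~{weighted_life(wireless_network):.0f} years")
```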

The NTIA then used the reliable service criteria to classify only four technologies as delivering a reliable signal – fiber, hybrid fiber-coaxial cable modem technology, DSL over copper, and terrestrial fixed wireless using licensed spectrum. Since DSL cannot deliver the speeds required by the grants, that leaves only three technologies eligible for BEAD grants.

The legislation allows the NTIA to consider other factors. It appears that one of the other factors the NTIA chose is the likelihood that a strong broadband signal will reach a customer. I speculate that fixed wireless using only unlicensed spectrum was eliminated because interference of unlicensed spectrum can degrade the signal to customers. It’s a little harder to understand which factors were used to eliminate satellite broadband. The high-orbit satellites are eliminated by not being able to meet the 100-millisecond requirement for latency established by the legislation. I would speculate that low-orbit satellites are not eligible for grants because the average life of a given satellite is being touted as being about seven years – but I’m sure there are other reasons, such as not yet having any proof of the speeds that can be delivered when a satellite network fills with customers.
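The latency case against high-orbit satellites can be shown with simple physics. The sketch below computes only the speed-of-light propagation delay to a geostationary satellite and back, ignoring all processing and queuing delay.

```python
SPEED_OF_LIGHT_KM_S = 299_792
GEO_ALTITUDE_KM = 35_786          # altitude of a geostationary satellite

# One direction of a request: user dish -> satellite -> ground gateway.
one_way_km = 2 * GEO_ALTITUDE_KM
one_way_ms = one_way_km / SPEED_OF_LIGHT_KM_S * 1_000

# A round trip (request plus response) traverses that path twice.
round_trip_ms = 2 * one_way_ms
print(f"one way: ~{one_way_ms:.0f} ms, round trip: ~{round_trip_ms:.0f} ms")
# ~239 ms one way and ~477 ms round trip from propagation alone -- far above
# the 100-millisecond threshold before any processing or queuing delay is added.
```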

From the short list of technologies deemed to be reliable, the NTIA has gone on to say several times in the NOFO that there is a preference for fiber. When looking at the factors defined by the legislation, fiber is the most future-proofed because speeds can be increased drastically by upgrading electronics. Fiber also has a much longer expected useful life than wireless technology.

The accusations against the NTIA seem to be implying that the NTIA had a preference for fiber even before being handed the BEAD grants. But in the end, the NTIA’s preference for fiber comes from ranking the eligible technologies in terms of how the technologies meet the criteria of the legislation. It’s worth noting that there are other parts of the NOFO that do not promote fiber. For example, state broadband offices are encouraged to consider other alternatives when the cost of construction is too high. I think it’s important to note that any NTIA preference for fiber does not restrict a state from awarding substantial awards to fixed wireless technology using licensed spectrum – that’s going to be a call to make by each state.

There is a lot of lobbying going on to expand the NTIA’s list to include fixed wireless using unlicensed spectrum and satellite broadband. I’ve even heard rumors of lawsuits to force the expansion of the available technologies. That’s the primary reason I wrote this blog – as a warning that lobbying and/or lawsuits might delay the BEAD grants. I think the NTIA has done what the legislation required, but obviously, anybody who is being excluded from the grants has nothing to lose by trying to get reinstated. When there is this much money at stake, I don’t expect those who don’t like the NTIA rules to go away quietly.

Smart Highways or Smart Cars?

It wasn’t too many years ago when you couldn’t read an article about broadband infrastructure without hearing about the need for smart highway infrastructure that was going to enable self-driving cars. There were various versions of how this would happen, but the predominant concept was that 5G networks along roads would communicate with cars and would enable efficient and safe travel by eliminating driver error by taking the driver out of the equation. This was one of the primary business cases for 5G promoted by the cellular carriers.

The idea went quiet for a variety of reasons, and I thought this idea was dead. I was surprised to recently hear about a $130 million project in Michigan to create a trial project for smart roads. This project will be for a 25-mile stretch of I-94 between Ann Arbor and Detroit. The project is described as creating the world’s most advanced road network for connected and automated vehicles.

There are a number of reasons that the industry has migrated almost all research into developing smart cars rather than smart roads.

First is the basic discussion of whether computing brains should be provided by centralized infrastructure or moved to the edge – in the self-driving car arena, the edge is the smart car. Most of the effort by car manufacturers has been to make cars smarter. Google and other pioneers in the field decided that cars needed to be able to deal with all driving conditions and all roads rather than relying somehow on smart roads. This approach has taken a lot longer than first predicted, but cars are getting better at this every year as manufacturers introduce new features.

The best arguments against the smart road are practical. First is the classic chicken and egg issue. Do we really need to wait until there are smart roads everywhere (or at least in a large percentage of places) before the self-driving car industry makes any economic sense? Let’s say this corridor in the project works as promised. What benefits or incentives does this one stretch of road provide until there are hundreds of times more smart roads with the same features? Are any car manufacturers going to develop features that rely on smart roads until there are enough smart roads for this to make sense? Will people be willing to pay more for a car with the smart road features if it can only be used on limited stretches of smart roads?

And then there is the cost issue. This project costs over $5 million per mile of roadway. I assume that construction costs will drop if the technology is expanded. Let’s assume this might cost $4 million per mile. There are over 430,000 miles of major roads in the country, including interstate highways and other major divided arterial highways. It would cost over $1.7 trillion to bring this technology to just those major highways. Who is going to pay for that? There are 276 million cars in the country, and that equates to over $6,200 per vehicle. I’ve thought about this several times over the last decade, and I can’t envision this being a priority compared to how $1.7 trillion could be used elsewhere. Worse yet, that huge cost only brings the solution to major highways. Is that enough miles for this to be worth it? How do driverless trucks navigate when they have to get off the major highways? There are over 2 million additional miles of paved roadways in the country.
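For anyone who wants to check the arithmetic, here is the quick calculation behind those totals, using the assumed $4 million per mile; the results are rounded.

```python
cost_per_mile = 4_000_000           # assumed cost per smart-road mile after scaling
major_road_miles = 430_000          # interstates and other major divided highways
registered_vehicles = 276_000_000   # cars in the country

total_cost = cost_per_mile * major_road_miles
print(f"total cost:  ${total_cost / 1e12:.2f} trillion")         # ~$1.72 trillion
print(f"per vehicle: ${total_cost / registered_vehicles:,.0f}")  # roughly $6,200
```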

But just suppose the country decides that smart roads are an important national priority, and we spend the trillions to get a smart highway system that revolutionizes product delivery nationwide. There are some obvious advantages to that vision of having driverless vehicles shuttling goods across the country 24/7, eliminating truckers. What about the cost of maintaining this smart road network? This network will consist of sensors along and embedded in roads and 5G and other wireless technologies to communicate with vehicles. We know that many of these technologies only have an effective life of around ten years. That means a perpetual, huge annual budget to keep the electronics network upgraded and functioning. There also would be a lot of people needed to keep the smart roads operating properly. I can see why companies like Cavnue and the cellular carriers love the idea of the smart road – it’s a perpetual, guaranteed revenue stream for them. But I ask again: who would be willing to pay for this huge ongoing expense?

When I first heard about smart roads, my first thought was to ask what happens when the smart road networks break down, as they inevitably will. What happens on a smart road during the time when the road isn’t smart? Does traffic stop or slow to a crawl? I really thought this was an idea that had been put to bed. But I guess the lure of making money from building, operating, and updating a new kind of infrastructure is just too lucrative to let die – even if it is impractical.

The project is being described as a public-private partnership, and I assume that means that public grants are helping to fund this, perhaps out of ARPA money. I’ve been doing a lot of work in Michigan, and I know how far $130 million would go to bring broadband to unserved homes – probably including some near this smart road. I am never opposed to projects that push innovation, and if this is intended as a test bed to explore a lot of new ideas I would think this is a worthwhile idea. But Cavnue is touting this as the first of many such projects that will be replicated across Michigan and the rest of the country. I honestly don’t get it and I invite anybody to tell me why this is a good idea.

 

Getting Ready for the Metaverse

In a recent article in LightReading, Mike Dano quotes Dan Rampton of Meta as saying that the immersive metaverse experience is going to require a customer latency between 10 and 20 milliseconds.

The quote came from a presentation at the Wireless Infrastructure Association Connect (WIAC) trade show. Dano says the presentation there was aimed at big players like American Tower and DigitalBridge, which are investing heavily in major data centers. Meta believes we need a lot more data centers closer to users to speed up the Internet and reduce latency.

Let me put the 10 – 20 millisecond latency into context. Latency in this case would be the total delay of signal between a user and the data center that is controlling the metaverse experience. Meta is talking about the network that will be needed to support full telepresence where the people connecting virtually can feel like they are together in real time. That virtual connection might be somebody having a virtual chat with their grandmother or a dozen people gaming.

The latency experienced by anybody connected to the Internet is the accumulation of a number of small delays.

  • Transmission delay is the time required to get a customer’s packets ready to route to the Internet. This is the latency that starts at the customer’s house and traverses the local ISP network. This delay is caused to some degree by the quality of the routers at the home – but the biggest factor in transmission delay is the technology being used. I polled several clients who tell me the latency inside their fiber networks typically ranges between 4 and 8 milliseconds. Some wireless technologies also have low latency as long as there aren’t multiple hops between a customer and the core. Cable HFC systems are slower and can approach the 20 ms limit, and older technologies like DSL have much larger latencies. Satellite latencies, even on the low-orbit networks, will not be fast enough to meet the 20 ms goal established by Meta due to the signal having to travel from the ground to a satellite and back to the Internet interface.
  • Processing delay is the time required by the originating ISPs to decide where a packet is to be sent. ISPs have to sort between all of the packets received from users and route each appropriately.
  • Propagation delay is due to the distance a signal travels outside of the local network. It takes a lot longer for a signal to travel from Tokyo to Baltimore than from Baltimore to Washington DC.
  • Queuing delays are the time required at the terminating end of the transmission. Since a metaverse connection is almost certainly going to be hosted at a data center, this is the time it takes to receive and appropriately route the signal to the right place in the data center.
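To see how those components stack up against Meta’s target, here is a minimal one-way delay budget; the individual numbers are illustrative assumptions loosely drawn from the ranges above, not measurements.

```python
# Illustrative one-way delay budget, in milliseconds -- assumed values, not measurements.
delays_ms = {
    "transmission (home + local ISP network)": 6.0,    # in the fiber range cited above
    "processing (originating ISP routing)": 1.0,
    "propagation (distance to the data center)": 5.0,  # assumes a reasonably close data center
    "queuing (inside the data center)": 2.0,
}

budget_ms = 20.0
total_ms = sum(delays_ms.values())

for name, ms in delays_ms.items():
    print(f"{name:45s} {ms:5.1f} ms")
status = "within" if total_ms <= budget_ms else "over"
print(f"{'total':45s} {total_ms:5.1f} ms  ({status} the {budget_ms:.0f} ms target)")
```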

It’s easy to talk about the metaverse as if it’s some far future technology. But companies are currently investing tens of billions of dollars to develop the technology. The metaverse will be the next technology that will force ISPs to improve networks. Netflix and streaming video had a huge impact on cable and telephone company ISPs, which were not prepared to have multiple customers streaming video at the same time. Working and schooling from home exposed the weakness of the upload links in cable company, fixed wireless, and DSL networks. The metaverse will push ISPs again.

Meta’s warning is that ISPs will need to have an efficient network if they want their customers to participate in the metaverse. Packets need to get out the door quickly. Networks that are overloaded at some times of the day will cause enough delay to make a metaverse connection unworkable. Too much jitter will mean resending missed packets, which adds significantly to the delay. Networks with low latency like fiber will be preferred. Large data centers that are closer to users can shave time off the latency. Customers are going to figure this out quickly and migrate to ISPs that can support a metaverse connection (or complain loudly about ISPs that can’t). It will be curious to see if ISPs will heed the warnings coming from companies like Meta or if they will wait until the world comes crashing down on their heads (which has been the historical approach to traffic management).

Fusion Energy on the Horizon?

This blog isn’t broadband-related, but it’s something that I find intriguing. Fusion energy has been touted as being about thirty years away since I was in college almost fifty years ago. As recently as ten years ago that was still the prediction. There have been huge amounts of investigation and progress during that time, but each new finding uncovered new challenges. The biggest issue has been finding a way to safely contain a ball of plasma that is as hot as the center of the sun. The approach over the years was to develop extremely powerful magnets that could suspend and hold the plasma.

But it looks like we finally found the breakthrough. Helion, a start-up in Everett, Washington, along with a few other companies, looks to finally be on the path of building and selling a workable fusion reactor. The company is currently building, and plans to market, its seventh-generation reactor, which should be completed in 2024.

One of the unique aspects of the Helion approach is that it is not trying to sustain a ball of plasma – that’s where the big fusion reactors have struggled. Helion instead creates short, repetitive bursts of plasma. The company’s sixth-generation reactor was built in 2020 and has been generating a high-energy pulse every ten seconds since then while achieving a temperature of over 100 million degrees Celsius with each burst. The company has been able to repeatedly sustain plasma, like in the center of the sun, for longer than 1 millisecond per burst. The goal of the next-generation machine will be to generate a pulse every second.

Helion has also taken a different approach than other fusion attempts in the generation of electricity. The typical approach has been to use the heat generated by the fusion plasma to create steam to drive turbines. Helion instead captures the electromagnetic waves released during the creation of the plasma, taking advantage of Faraday’s Law of Induction. Helion has created a magnetic field around the fusion reactor that interacts with the energy released when deuterium and helium-3 ions are smashed together. Helion says this results in 95% energy efficiency compared to 70% for the more traditional approach.

The seventh-generation fusion reactor will be about the size of a commercial shipping container and will produce about 50 megawatts of clean energy. That’s enough power for 40,000 homes. Helion believes it will be able to generate electricity for about $10 per megawatt-hour, which is about a third of the cost of coal-fired or solar power generation.
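As a quick sanity check on those claims, the arithmetic below converts 50 megawatts spread across 40,000 homes into an average draw per home; the monthly-usage comparison assumes a typical U.S. household figure.

```python
reactor_output_mw = 50      # claimed output of the seventh-generation reactor
homes_served = 40_000       # claimed number of homes powered

avg_kw_per_home = reactor_output_mw * 1_000 / homes_served
monthly_kwh_per_home = avg_kw_per_home * 24 * 30

print(f"average continuous draw per home: {avg_kw_per_home:.2f} kW")
print(f"implied monthly usage per home:   {monthly_kwh_per_home:.0f} kWh")
# 1.25 kW of continuous draw works out to roughly 900 kWh per month, which is
# in the ballpark of a typical U.S. household's electric usage.
```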

Perhaps the best feature of the fusion reactor is that it creates no serious waste. There are two radioactive isotopes created by the reaction. The first is tritium, which has a half-life of twelve years and is in big demand for use in wristwatches and highway exit signs. The other output is helium-3, which is needed to produce the fusion reaction – the fusion generator creates its own fuel. Helium-3 is rare and could also provide the basis for spaceship propulsion systems that might let us travel between stars. Approximately 25 tons of helium-3 could generate all of the electricity used by the country in a year – but the whole U.S. supply of helium-3 today is only about 20 kilograms.

The end-product of widespread fusion generators would be the creation of endless clean energy. With fusion power, we’d still need electric grids. However, as unlimited power can be produced locally, this technology would eventually eliminate the energy-wasteful high-power transmission systems used today to connect regions of the electric grid together.

The first customers of the technology are likely to be power-hungry data centers. Data centers are most often built in the parts of the country with the most affordable electricity, but fusion power would mean we could put data centers close to the places where data is most used.

Update on DOCSIS 4.0

LightReading recently reported on a showcase at CableLabs where Charter and Comcast demonstrated the companies’ progress in testing the concepts behind DOCSIS 4.0. This is the big cable upgrade that will allow the cable companies to deploy fast upload speeds – the one area where they have a major disadvantage compared to fiber.

Both companies demonstrated hardware and software that could deliver a lot of speed. But the demos also showed that the cable industry is probably still four to five years away from having a commercially viable product that cable companies can use to upgrade networks. That’s a long time to wait to get better upload speeds.

Charter’s demonstration was able to use frequencies within the coaxial cables up to 1.8 GHz. That’s a big leap up from today’s maximum frequency utilization of 1.2 GHz. As a reminder, a cable network operates as a giant radio system that is captive inside the coaxial copper wires. Increasing the range of frequencies used opens up a big chunk of additional bandwidth capacity inside the transmission path. These breakthroughs are akin to G.fast, which harnesses higher frequencies inside telephone copper wires. While engineers can theoretically guess how the higher frequencies will behave, the reason for these early tests is to find all of the unexpected quirks of how the various frequencies interact inside the coaxial network in real-life conditions. A coaxial cable is not a sealed environment, and noise from the outside world can interfere unexpectedly with parts of the transmission path.
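To get a feel for why the extra 600 MHz matters, here is a rough sketch of the added capacity; the spectral-efficiency number is an illustrative assumption in the general range of modern high-order OFDM, not a published DOCSIS specification value.

```python
current_top_mhz = 1_200     # roughly where today's DOCSIS 3.1 plant tops out
future_top_mhz = 1_800      # the extended-spectrum target demonstrated by Charter

new_spectrum_mhz = future_top_mhz - current_top_mhz
assumed_bits_per_hz = 8.0   # illustrative efficiency for high-order OFDM in clean plant

added_capacity_gbps = new_spectrum_mhz * 1e6 * assumed_bits_per_hz / 1e9
print(f"~{new_spectrum_mhz} MHz of new spectrum, roughly {added_capacity_gbps:.1f} Gbps "
      f"of added shared capacity per service group")
```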

Charter used equipment supplied by Vecima for the node, Teleste for amplifiers, and ATX Networks for taps. The node is the electronics that sits in a neighborhood and converts the signal from fiber onto the coaxial network. Amplifiers are needed because the signals in a coaxial system don’t travel very far without having to be amplified and refreshed. Taps are the devices that peel signals from the coaxial distribution network to feed into homes. A cable company will have to replace all of these components, plus install new modems, to upgrade to a higher frequency network – which means the DOCSIS 4.0 upgrade will be expensive.

One of the impressive changes from the Charter demo was that the company said it could overlay the new DOCSIS system over top of an existing cable network without respacing. That’s a big deal because respacing would mean moving existing channels to make room for the new bandwidth allocation.

Charter was able to achieve speeds of 8.9 Gbps download and 6.2 Gbps upload. They feel confident they will be able to get this over 10 Gbps. Comcast achieved speeds in its test of 8.2 Gbps download and 5.1 Gbps upload. In addition to researching DOCSIS 4.0, Comcast is also looking for ways to use the new technology to beef up existing DOCSIS 3.1 networks to provide faster upload speeds sooner.

Both companies face a market dilemma. They are both under pressure to provide faster upload speeds today. If they don’t find ways to do that soon, they will lose customers to fiber overbuilders and even the FWA wireless ISPs. It’s going to be devastating news for cable stock prices in the first quarter after Charter or Comcast loses broadband customers – but the current market trajectory shows that’s likely to happen.

Both companies are still working on lab demos and are using a breadboard chip designed specifically for this test. The normal lab development process means fiddling with the chip and trying new versions until the scientists are satisfied. That process always takes a lot longer than executives want but is necessary to roll out a product that works right. But I have to wonder if cable executives are in a big hurry to make an expensive upgrade to DOCSIS 4.0 so soon after upgrading to DOCSIS 3.1.

7G – Really?

I thought I’d check in on the progress that laboratories have made in considering 6G networks. The discussion on what will replace 5G kicked off with a worldwide meeting hosted in 2019 by the University of Oulu in Levi, Lapland, Finland.

6G technology will explore the frequencies between 100 GHz and 1 THz. This is the frequency range that lies between radio waves and infrared light. These spectrum bands could support unimaginable wireless data transmission rates of up to one terabit per second – with the tradeoff that such transmissions will only be effective over extremely short distances.
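To ground the short-distance tradeoff, here is a simple free-space path loss comparison between a mid-band 5G frequency and a sub-terahertz 6G candidate; this ignores atmospheric and molecular absorption, which makes the real picture even worse at these frequencies.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for freq_ghz in (3.5, 300):                 # mid-band 5G vs. a sub-terahertz 6G candidate
    for distance_m in (10, 100):
        loss_db = free_space_path_loss_db(distance_m, freq_ghz * 1e9)
        print(f"{freq_ghz:6.1f} GHz at {distance_m:4d} m: {loss_db:6.1f} dB")
# The 300 GHz link loses about 39 dB more than 3.5 GHz at any given distance
# (20 * log10(300 / 3.5)), before counting atmospheric absorption, which is
# why usable range shrinks to very short hops.
```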

Scientists have already said 5G will be inadequate for some computing and communication needs. There is definitely a case to be made for applications that need huge amounts of data in real-time. For example, a 5G wireless signal at a few gigabits per second is not able to transmit enough data to support complex real-time manufacturing processes. There is not enough data being transmitted with a 5G network to support things like realistic 3D holograms and the future metaverse.

Scientists at the University of Oulu say they are hoping to have a lab demonstration of the ability to harness the higher spectrum bands by 2026, and they expect the world will start gelling on 6G standards around 2028. That all sounds reasonable and is in line with what they announced in 2019. One of the scientists at the University was quoted earlier this year saying that he hoped that 6G wouldn’t get overhyped as happened with both 4G and 5G.

I think it’s too late for that. You don’t need to do anything more than search for 6G on Google to find a different story – you’ll have to wade through a bunch of articles declaring we’ll have commercial 6G by 2030 before you even find any real information from those engaged in 6G research. There is even an online 6G magazine with news about everything 6G. These folks are already hyping that there will be a worldwide scramble as governments fight to be the first ones to master and integrate 6G – an upcoming 6G race.

I just shake my head when I see this – but it is nothing new. It seems every new technology these days spawns an industry of supposed gurus and prognosticators who try to monetize the potential of each new technology. The first technology I recall seeing this happen with was municipal WiFi in the early 2000s. There were expensive seminars and even a paper monthly magazine touting the technology – which, by the way, barely worked and quickly fizzled. Since then, we’ve seen the guru industry pop up for every new technology like 5G, blockchain, AI, bitcoin, and now the metaverse and 6G. Most new cutting-edge technologies find their way into the economy, but at a much slower pace than touted by the so-called early experts.

But before the imaginary introduction of 6G by 2030, we will need to first integrate 5G into the world. Half of the cellphones in the world still connect using 3G. While 3G is being phased out in the U.S., it’s going to be a slower process elsewhere. While there are hundreds of Google links to articles that predict huge numbers of 5G customers this year – there aren’t any. At best, we’re currently at 4.1G or 4.2G – but the engineering reality is obviously never going to deter the marketers. We’ll probably see a fully compliant 5G cell site before the end of this decade, and it will be drastically different, and better, than what we’re calling 5G today. It’ll take another few years after that for real 5G technology to spread across U.S. urban areas. There will be a major discussion among cellular carriers about whether the 5G capabilities make any sense in rural areas since the 5G technology is mostly aimed at solving overcrowded urban cellular networks.

Nobody is going to see a 6G cellphone in their lifetime, except perhaps as a gimmick. We’re going to need several generations of better batteries before any handheld device can process data at terabit speeds without zapping the battery within minutes. That may not deter Verizon from showing a cellular speed test at 100 Gbps – but marketers will be marketers.

Believe it or not, there are already discussions about 7G – although nobody can define it. It seems that it will have something to do with AI and the Internet of Things. It’s a little fuzzy about how something after 6G will even be related to the evolution of cellular technology – but this won’t stop the gurus from making money off the gullible.