Upgrades for FWA Cellular Wireless

In the recent third-quarter earnings call, Verizon CEO Hans Vestberg expressed strong confidence in the future of the company’s fixed wireless access (FWA) broadband product. This product provides home and business broadband using the same cellular spectrum used today to provide bandwidth for cellphones.

There is good reason for the company to be optimistic about the broadband product. In only a few short years the company has added almost 2.7 million FWA customers, and most of its broadband customer growth in the third quarter of this year came from FWA. As noted by Vestberg, rapid growth has continued even after the company increased the price of the product by $10 per month.

As I have addressed in several blogs, there are some limitations on the current FWA product. The biggest downside is that the fast speeds advertised for FWA by Verizon and T-Mobile are only available to customers who live within a mile or so of a cell tower. Speeds seem to drop by half in the second mile from a tower and fall off significantly by the third mile.
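To make that falloff concrete, here is a minimal sketch of the pattern described above. The halving in the second mile comes from the speed-test observations; the third-mile factor and the advertised speed are my own illustrative assumptions, not carrier figures.

```python
# Illustrative model of the FWA speed falloff described above.
# The third-mile factor and advertised speed are assumptions, not carrier data.

def estimated_fwa_speed(advertised_mbps: float, miles_from_tower: float) -> float:
    """Rough estimate of FWA download speed by distance from the cell tower."""
    if miles_from_tower <= 1.0:
        return advertised_mbps          # close-in customers see near-advertised speeds
    if miles_from_tower <= 2.0:
        return advertised_mbps * 0.5    # speeds seem to cut in half in the second mile
    return advertised_mbps * 0.2        # assumed significant drop by the third mile

for miles in (0.5, 1.5, 2.5):
    print(f"{miles} miles from tower: ~{estimated_fwa_speed(300, miles):.0f} Mbps")
```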

Another drawback is that both Verizon and T-Mobile throttle the bandwidth for FWA any time that cellphone usage gets heavy. In scouring multiple speed tests, we have found customers whose speeds swing between fast and extremely slow – which might be evidence of this throttling.

But Vestberg mentioned a big technology boost that will be coming to the Verizon FWA product. Verizon purchased a lot of C-Band spectrum in an FCC auction in 2021. This is spectrum that sits between 3.7 GHz and 3.98 GHz. The licensed spectrum provides Verizon with anywhere from 140 MHz to 200 MHz of cellular bandwidth in markets across the country.

Vestberg says the company is starting to upgrade busy urban towers with the extra C-Band spectrum. He implied that the upgrades will be coming to other urban towers and some suburban towers in 2024.

He said the C-Band spectrum will double or triple the cellular bandwidth depth in most markets. He said that using the new spectrum for FWA could result in speeds as fast as 900 Mbps to 2.4 Gbps. Like all speed claims made by ISPs, those speeds are likely faster than anybody will see in real life and probably represent theoretical maximums. However, FWA users can expect a big boost in speeds, particularly those living near towers.
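As a rough sanity check on those numbers, peak wireless throughput scales roughly with channel bandwidth times spectral efficiency. The sketch below uses the 140-200 MHz C-Band holdings mentioned above; the spectral-efficiency values are assumptions I picked for illustration, not Verizon figures.

```python
# Back-of-envelope throughput: bandwidth (Hz) x spectral efficiency (bits/s/Hz).
# The spectral-efficiency values are assumptions for illustration only.

def peak_throughput_gbps(bandwidth_mhz: float, bits_per_sec_per_hz: float) -> float:
    return bandwidth_mhz * 1e6 * bits_per_sec_per_hz / 1e9

for mhz in (140, 200):              # C-Band depth mentioned above
    for efficiency in (6, 12):      # assumed values, roughly with and without heavy MIMO
        gbps = peak_throughput_gbps(mhz, efficiency)
        print(f"{mhz} MHz at {efficiency} bits/s/Hz -> ~{gbps:.2f} Gbps peak")
```

Those rough numbers bracket the 900 Mbps to 2.4 Gbps range Vestberg cited, which reinforces that the claim is a theoretical peak rather than an everyday speed.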

I have to assume that Verizon has already built C-Band capabilities into its home FWA receivers, so speed upgrades ought to be realized immediately after an upgrade. A lot of the newest cell phones also already include C-Band capabilities. Verizon seems to have the most aggressive plan for C-Band, but AT&T has started to deploy the spectrum in a few markets. T-Mobile owns C-Band spectrum, but still seems to be hanging on the sidelines for upgrades.

Significant speed increases can make FWA a potent competitor to cable companies, at least for customers close to a cellular tower. FWA prices are far lower than the prices the big cable companies charge for broadband, and fast speeds can make it a viable alternative.

The first generation of FWA has delivered speeds in the 100-300 Mbps range. That has been fast enough to attract millions of customers, but the first-generation product has felt more like a big upgrade to DSL than a direct threat to cable companies. If the current speeds are really doubled or tripled, many households are going to be attracted by the lower prices of FWA. It’s an interesting product to market since its attractiveness to a given customer depends directly on the strength of the cellular signal that reaches their home – an extremely local situation.

Google Moonshot Delivering Wireless Backhaul

You may recall a number of years ago when Google experimented with delivering broadband from balloons in an effort labeled Project Loon. The project was eventually dropped, but a remnant of the project has now resurfaced as Taara – broadband delivered terrestrially by lasers.

Project Loon worked by beaming broadband from balloons to receivers on the ground, and Taara sprang out of the idea of using the same lasers for terrestrial broadband. Taara claims to be able to beam as much as 20 gigabits per second for 20 kilometers (12 miles). While that is impressive, the important claim is that the hardware is affordable and easy to install and align.

The Taara effort grew out of the push by Google founders Larry Page and Sergey Brin to form a division to work on moonshots – ideas that sound futuristic but that could someday make the world a radically better place. This resulted in the creation of X, the laboratory in charge of the moonshot ideas and the parent of Taara.

Taara sees the technology as a way to increase broadband access in areas with little or no connectivity. It is also envisioned as a technology that can provide better backhaul to cell towers and ISP hub sites. The most promising use of the technology is to bring a high-speed connection to the many small villages around the world that aren’t connected to broadband.

The X website includes several case studies of the technology. In the Congo, the terminals were used to beam broadband across the Congo River to make a connection between Brazzaville and Kinshasa. This 4.8-kilometer hop was far less expensive than building an almost 400-kilometer fiber route by road. Within the first 20 days after the connection went live, the Taara link carried almost 700 terabytes of data.

https://x.company/blog/posts/taara-beaming-broadband-across-congo/
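For a sense of scale, the traffic figure quoted above works out to a substantial sustained data rate:

```python
# Quick arithmetic on the Congo link: 700 terabytes over the first 20 days
# is roughly a 3 Gbps sustained average.

terabytes = 700
seconds = 20 * 24 * 3600
avg_gbps = terabytes * 1e12 * 8 / seconds / 1e9
print(f"~{avg_gbps:.1f} Gbps average over the first 20 days")   # prints ~3.2 Gbps
```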

Taara has already been deployed in thirteen countries, and the company is working with major players to quickly expand the use of the technology. This includes deals with the Econet Group and its subsidiary Liquid Telecom in Africa, the ISP Bluetown in India, and Digicel in the Pacific Islands. Taara is also now working with Bharti Airtel, one of the largest telecom providers in India, to ramp up distribution. India has hundreds of thousands of small villages that could be candidates for the technology.

In Africa, the roll-out of the technology started in Kenya, working with Liquid Telecom and the Econet Group. The light-beam links are seen as the best way to build backhaul in places where it is challenging or dangerous to build fiber networks, such as across rivers, across national parks, or in post-conflict zones.

https://x.company/blog/posts/bringing-light-speed-internet-to-sub-saharan-africa/

There are still 2 billion people on the planet who are not connected to the Internet, and in most cases, one of the primary impediments to expanding Internet services is the lack of affordable and reliable backhaul. The Taara lasers seem like a solution to bring broadband to a huge number of places that have lacked connectivity.

Is Jitter the Problem?

Most people assume that when they have broadband problems, the culprit is that their broadband speed isn’t fast enough. But in many cases, problems are caused by high jitter and latency. Today, I’m looking at the impact of jitter.

What is Jitter? Jitter happens when incoming data packets are delayed and don’t show up at the expected time or in the expected order. When data is transmitted over the Internet it is broken into small packets. A typical packet is approximately 1,000 bytes or 0.001 megabytes. This means a lot of packets are sent to your home computer for even basic web transactions.
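To see how quickly packets add up, here is some simple arithmetic using the approximate packet size above; the web page size is an assumed example.

```python
# Packets needed for an ordinary download, assuming ~1,000-byte packets.
# The 5 MB page size is an assumed example.

packet_bytes = 1_000
page_megabytes = 5
packets = page_megabytes * 1_000_000 / packet_bytes
print(f"A {page_megabytes} MB web page arrives as roughly {packets:,.0f} packets")
```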

Packets are created at the location that originates a web transmission. This might be a site that is streaming a video, sending a file, completing a voice over IP call, or letting you shop online. The packets are sent in the order that the original data stream is encoded, but each packet can take a separate path across the Internet. Some packets arrive quickly, while others are delayed for some reason. Measuring jitter means measuring the degree to which packets end up at your computer late or in the wrong order.
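Here is a minimal sketch of one way jitter can be measured: compare the spacing between packet arrivals with the spacing at which they were sent. The timestamps are made up for illustration; real tools use a similar idea (RFC 3550 smooths it over time for VoIP).

```python
# Simplified jitter measurement: deviation between send spacing and arrival
# spacing of consecutive packets. Timestamps below are invented examples.

from statistics import mean

sent    = [0.000, 0.020, 0.040, 0.060, 0.080]   # packets sent every 20 ms
arrived = [0.105, 0.124, 0.151, 0.162, 0.189]   # arrival times at the receiver

deltas = [abs((arrived[i] - arrived[i - 1]) - (sent[i] - sent[i - 1]))
          for i in range(1, len(sent))]
print(f"Average jitter: {mean(deltas) * 1000:.1f} ms")   # ~6 ms in this example
```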

Why Does Jitter Matter? Jitter matters the most when you are receiving packets for a real-time transaction like a streaming video, a Zoom call, a voice over IP call, or a video connection with a classroom. Your home computer is going to do its best to deliver the transmissions on time, even if all the packets haven’t arrived. You’ll notice missing packets of data as pixelation or fuzziness in a video, or as poor sound quality on a voice call. If enough packets are late, you might drop a VoIP call or get kicked out of a Zoom session.

Jitter doesn’t matter as much for other kinds of data. Most people are not concerned if it takes slightly longer to download a data file or to receive an email. These transactions don’t show up as received on your computer until all (or mostly all) of the packets have been received.

What Causes Jitter? The primary cause of jitter is network congestion. This happens when places in the network between the sender and the receiver are sent more data packets than can be processed in real time.

Bandwidth constraints can occur anywhere in a network where there is a possibility of overloading the capacity of the electronics. The industry uses the word chokepoint to describe any place where data can be restricted. On an incoming data transmission, an ISP might not have enough bandwidth on the incoming backbone connection. Every piece of ISP network gear that routes traffic within an ISP network is a potential chokepoint – a common chokepoint is where data is handed off to a neighborhood. The final chokepoint is at the home if data is coming in faster than the home broadband connection can handle it.
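A simple way to picture chokepoints is that the usable speed of a connection is set by the slowest link in the chain. The capacities below are hypothetical examples, not measurements.

```python
# The effective speed of a path is limited by its slowest link (the chokepoint).
# All capacities below are hypothetical examples.

path_capacity_mbps = {
    "ISP backbone connection": 10_000,
    "neighborhood node": 1_000,
    "home broadband link": 300,
    "aging WiFi modem": 90,
}

chokepoint = min(path_capacity_mbps, key=path_capacity_mbps.get)
print(f"Chokepoint: {chokepoint} at {path_capacity_mbps[chokepoint]} Mbps")
```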

A common cause of overloaded chokepoints is old or inadequate hardware. An ISP might have outdated or too-small switches in the network. The most common chokepoints at homes are outdated WiFi modems or older computers that can’t handle the volume of incoming data.

One of the biggest problems with network chokepoints is that any time an electronics chokepoint gets too busy, packets can be dropped or lost. When that happens, your home computer or your ISP will request that the missing packets be sent again. The higher the jitter, the more packets are lost and must be sent multiple times, and the greater the total amount of data being sent through the network. With older and slower technologies like DSL, the network can get paralyzed if failed packets accumulate to the point of overwhelming the technology.
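To illustrate the compounding effect described above, here is a small calculation of how resending lost packets inflates total traffic; the loss rates are assumed examples.

```python
# If a fraction of packets must be resent (and resends can also fail), total
# traffic grows by roughly 1 / (1 - loss_rate). Loss rates are assumed examples.

for loss in (0.01, 0.05, 0.20):
    extra = 1 / (1 - loss) - 1
    print(f"{loss:.0%} packet loss -> ~{extra * 100:.0f}% extra traffic on the network")
```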

Contrary to popular belief, faster speeds don’t reduce jitter and can actually increase it. If you have an old, inadequate WiFi modem and upgrade to a faster technology like fiber, the WiFi modem will be even more overwhelmed than it was with a slower bandwidth technology. The best solution for lowering jitter is for ISPs and customers to replace the equipment that causes chokepoints. Fiber technology isn’t better just because it’s faster – it also includes technologies that move packets quickly through chokepoints.

What Happened to Quantum Networks?

A few years ago, there were a lot of predictions that we’d see broadband networks converting to quantum technology because of the enhanced security. As happens with many new technologies, quantum computing is advancing more slowly than the wild predictions that accompanied its launch suggested.

What are quantum computing and quantum networks? The computers we use today are all Turing machines that convert data into bits represented by either a 1 or a 0 and then process data linearly through algorithms. Quantum computing takes advantage of a property found in subatomic particles called superposition, meaning that particles can exist in more than one state at the same time, such as an electron occupying two different energy levels. Quantum computing mimics this subatomic world by creating what are called qubits, which can exist as both a 1 and a 0 at the same time. One qubit can perform two calculations at once, and when many qubits are used together, the number of simultaneous calculations grows exponentially. A four-qubit computer can perform 2⁴, or 16, calculations at the same time. Some quantum computers are currently capable of 1,000 qubits, or 2¹⁰⁰⁰ simultaneous calculations.
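The exponential growth is easy to see with simple arithmetic: n qubits can represent 2ⁿ states at once.

```python
# n qubits can represent 2**n states simultaneously - the growth is exponential.

for n in (4, 10, 100, 1000):
    print(f"{n} qubits -> 2^{n} = {2 ** n:.4g} simultaneous states")
```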

We are starting to see quantum computing in the telecom space. In 2020, Verizon conducted a network trial using quantum key distribution (QKD) technology. This uses a method of encryption that might be unhackable. Photons are sent one at a time alongside an encrypted fiber optic transmission. If anybody attempts to intercept or listen to the encrypted light stream, the polarization of the photons is disturbed, and the sender and receiver of the message both know instantly that the transmission is no longer safe. The theory is that this will stop hackers before they can learn enough to crack into and analyze a data stream. Verizon also added a second layer of security using a quantum random number generator that updates the encryption key randomly in a way that can’t be predicted.
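The detection principle can be demonstrated with a toy simulation in the style of the BB84 protocol: when an eavesdropper measures photons in the wrong basis, errors appear in the shared key and the two endpoints can see them. This is a conceptual sketch only and is not a description of how Verizon’s trial was implemented.

```python
# Toy BB84-style simulation: an eavesdropper who measures photons introduces
# roughly 25% errors in the sifted key, which the two ends can detect.
# Conceptual sketch only, not Verizon's actual system.

import random

def qkd_error_rate(n_photons: int, eavesdropper: bool) -> float:
    errors = kept = 0
    for _ in range(n_photons):
        alice_bit = random.randint(0, 1)
        alice_basis = random.choice("+x")
        photon_bit, photon_basis = alice_bit, alice_basis

        if eavesdropper:
            eve_basis = random.choice("+x")
            if eve_basis != photon_basis:          # wrong basis gives a random result
                photon_bit = random.randint(0, 1)
            photon_basis = eve_basis               # photon is re-sent in Eve's basis

        bob_basis = random.choice("+x")
        bob_bit = photon_bit if bob_basis == photon_basis else random.randint(0, 1)

        if bob_basis == alice_basis:               # bases compared publicly; keep matches
            kept += 1
            errors += bob_bit != alice_bit
    return errors / kept

print(f"Error rate without eavesdropper: {qkd_error_rate(20_000, False):.1%}")
print(f"Error rate with eavesdropper:    {qkd_error_rate(20_000, True):.1%}")
```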

A few months ago, EPB, the municipal fiber provider in Chattanooga, announced a partnership with Qubitekk to let customers on the City’s fiber network connect to a quantum computer. The City is hoping to attract companies that want to benefit from quantum computing, and it has already heard from Fortune 500 companies, startups, and government agencies that are interested in using the quantum computer links.

EPB has established the quantum network separately from its last-mile network to accommodate the special needs of quantum transmissions. The quantum network uses more than 200 existing dark fibers to establish customer links, and EPB engineers will constantly monitor the entangled particles on the network.

Quantum computing is most useful for applications that require large numbers of rapid calculations. For example, quantum computing could produce faster and more detailed weather maps in real time. Quantum computing is being used in research on drugs or exotic materials where scientists can compare multiple complex molecular structures easily. One of the most interesting current uses is that quantum computing can greatly speed up the processing power of artificial intelligence that is now sweeping the world.

It doesn’t look like quantum networking is coming to most fiber networks any time soon. The biggest holdup is the creation of efficient and cost-effective quantum computers. Today, most of these computers are in labs at universities or government facilities. The potential for quantum computing is so large that the technology could explode onto the scene when the hardware issue is solved.

The Wireless Innovation Fund

Practically everybody in the country has a cellphone, and mobile communication is now a huge part of daily life and a key underpinning of the economy. But as we found out during the pandemic, key parts of the economy, like the cellphone market, are vulnerable to supply chain issues. The U.S. cellphone industry is particularly exposed to market forces since the industry is dominated by a small number of manufacturers.

One of the many programs funded by recent legislation is the Public Wireless Supply Chain Innovation Fund, created by the CHIPS and Science Act of 2022. The program has $1.5 billion to award in grants that explore ways to support open and interoperable 5G wireless networks.

The specific goals of the grant fund are to provide grants that will:

  • Accelerate commercial deployment of open, interoperable equipment;
  • Promote compatibility of new 5G equipment;
  • Allow the integration of multiple vendors into the wireless network environments;
  • Identify the criteria needed to define equipment as compliant with open standards;
  • Promote and deploy security features and network function virtualization for multi-vendor, interoperable networks.

All of this equates to opening the cellular network to multiple new U.S. vendors. That will make cellular networks far less susceptible to foreign supply chain problems while also creating new U.S. jobs. There is also the additional goal of increasing the security of our wireless networks. This is all being done in conjunction with the other provisions of the CHIPS Act, which have already resulted in over fifty projects to build chips in the U.S.

There have already been 127 applications for grants from the fund, totaling $1.39 billion. Three grants have been announced so far, with many more to come. The first three grants are:

Northeastern University for $1.99 million to develop an accurate testing platform to enable the construction of sustainable and energy-efficient wireless networks.

New York University for $2 million to develop testing and evaluation procedures for open and secure adaptive spectrum sharing for 5G and beyond.

DeepSig Inc. for $1.49 million to dramatically improve the fidelity, speed, and repeatability of OpenRAN air-interface performance testing, using an AI model to set new standards and tools that revolutionize the evaluation of interoperable ORAN in real-world conditions.

I’ve always believed that the government should take the lead on directed research of this type. I’m sure some of the ideas being funded won’t pan out, but the point of directed research is to uncover ideas that make it into the next generation of deployed technology. I’d love to see something similar done for ISP technologies. I hope this is not a one-time grant program because funding this kind of research every year is one of the best ways to keep the U.S. at the forefront of both wireless and broadband technology – using American technology.

DOCSIS 4.0 vs. Fiber

Comcast and Charter previously announced that they intend to upgrade cable networks to DOCSIS 4.0 to be able to better compete against fiber networks. The goal is to be able to offer faster download speeds and drastically improve upload speeds to level the playing field with fiber in terms of advertised speeds. It’s anybody’s guess if these upgrades will make cable broadband equivalent to fiber in consumers’ eyes.

From a marketing perspective, there are plenty of people who see no difference between symmetrical gigabit broadband offered by a cable company and a fiber overbuilder. However, a lot of the public has already become convinced that fiber is superior. AT&T and a few other big telcos say they quickly get a 30% market share when they bring fiber to a neighborhood, and they aspire to reach a 50% market share within 3-4 years.

At least a few big cable companies believe fiber is better. Cox is in the process of overbuilding fiber in some of its largest markets. Altice has built fiber in about a third of its markets. What’s not talked about much is that cable companies can overlash fiber onto existing coaxial cables in the same way that telcos can overlash onto copper cables. It costs Cox a lot less to bring fiber to a neighborhood than it costs a fiber overbuilder that can’t overlash onto existing wires.

From a technical perspective, engineers and broadband purists will tell you that fiber delivers a better broadband signal. A few years back, I witnessed a side-by-side comparison of fiber and coaxial broadband delivered by ISPs. Although the subscribed download speeds being delivered were the same, the fiber connection felt cleaner and faster to the eye. There are several technical reasons for the difference.

  • The fiber signal has far less latency. Latency is a delay in getting bits delivered on a broadband signal. Higher latency means that a smaller percentage of bits get delivered on the first attempt. The impact of latency is most noticeable when viewing live sporting events where the signal is sent to be viewed without having received all of the transmitted bits – and this is seen to the eye as pixelation or less clarity of picture.
  • Fiber also has much less jitter. This is the variability of the signal from second to second. A fiber system generally delivers broadband signals on time, while the nuances of a copper network cause minor delays and glitches. As one example, a coaxial copper network acts like a giant radio antenna and, as such, picks up stray signals that enter the network and can disrupt the broadband signal. Disruptions inside a fiber network are comparatively minor and usually come from small flaws in the fiber caused during installation or by later damage. Both latency and jitter can be roughly measured from the customer side, as sketched below.
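The sketch below times repeated TCP connections to a server and reports the average (a latency proxy) and the variation (a jitter proxy). The host is only an example, and the same test should be run against each connection being compared.

```python
# Measure connection latency and its variability (a jitter proxy) by timing
# repeated TCP connections. The host is only an example.

import socket
import time
from statistics import mean, stdev

def connect_times_ms(host: str, port: int = 443, samples: int = 10) -> list[float]:
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass                                   # connection is closed immediately
        times.append((time.perf_counter() - start) * 1000)
    return times

results = connect_times_ms("www.example.com")
print(f"Latency ~{mean(results):.1f} ms, jitter (std dev) ~{stdev(results):.1f} ms")
```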

The real question that will have to be answered in the marketplace is whether cable companies can reverse years of public perception that fiber is better. They have their work cut out for them. Fiber overbuilders tell me that they rarely lose a customer back to the competing cable company. Even if the cable networks get much better, people are going to remember when they used to struggle to hold a Zoom call on cable.

Before the cable companies can make the upgrade to DOCSIS 4.0, which is still a few years away, the big cable companies are planning to upgrade upload speeds in some markets using a technology referred to as a mid-split. This will allocate more broadband to the upload path. It will be interesting to see if that is enough of an upgrade to stop people from leaving for fiber. I think cable companies are scared of seeing a mass migration to fiber in some neighborhoods because they understand how hard it will be to win people back. Faster upload speeds may fix the primary issue that people don’t like about cable broadband, but will it be enough to compete with fiber? It’s going to be an interesting marketing battle.
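For context on what a mid-split changes, the commonly cited DOCSIS split points put the upstream path at roughly 5-42 MHz in a traditional (sub-split) plant, roughly 5-85 MHz with a mid-split, and roughly 5-204 MHz with a high-split. The quick comparison below uses those generic figures, not any specific operator’s plan.

```python
# Upstream spectrum under the commonly cited DOCSIS split options.
# Generic figures for illustration, not any specific cable company's plan.

splits_mhz = {
    "sub-split (traditional)": (5, 42),
    "mid-split": (5, 85),
    "high-split": (5, 204),
}

base_low, base_high = splits_mhz["sub-split (traditional)"]
base_width = base_high - base_low
for name, (low, high) in splits_mhz.items():
    width = high - low
    print(f"{name}: {width} MHz upstream (~{width / base_width:.1f}x the traditional split)")
```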

New Battery Technology

The world is growing increasingly dependent on good batteries. It’s clear that using the new 5G spectrum drains cellphone batteries faster. Everybody has heard horror stories of lithium batteries from lawnmowers or weed eaters catching fire. Flying with lithium batteries is a growing challenge. People with electric cars want better range without having to recharge. The best way to capture and use alternative forms of power is to store electricity in big batteries. The increasing demand for batteries is happening at the same time that trade wars over the raw materials used in batteries are heating up through tariffs and trade restrictions.

Luckily there is a huge amount of research underway to look for batteries that last longer, charge faster, and are made from more readily available minerals.

Zinc-manganese oxide batteries. Researchers at the Department of Energy’s Pacific Northwest National Laboratory have developed a technology that can produce high-energy-density batteries out of zinc and manganese. These are readily available minerals that could be used to create low-cost storage batteries.

Scientists have experimented with zinc-manganese batteries since the 1990s, but they could never find a way to recharge the batteries more than a few times due to the deterioration of the manganese electrode. The researchers have found a technique that reduces and even reverses that deterioration and have created batteries that can be recharged over 5,000 times. The technology is aimed at the larger batteries used for electricity storage in solar systems, vehicles, and power plants.

Organosilicon Electrolyte Batteries. Scientists at the University of Wisconsin were searching for an alternative to lithium batteries to avoid the danger of the electrolyte catching fire. Professors Robert Hamers and Robert West developed an organosilicon electrolyte material that can greatly reduce the possibility of fires when added to current Li-ion batteries. The electrolytes also add significantly to battery life.

Gold Nanowire Gel Electrolyte Batteries. Scientists at the University of California, Irvine, have been experimenting with gels as the main filler in batteries since gels are generally not as combustible as liquids. They had also been experimenting with using nanowires as electrodes, but the tiny wires were too delicate and quickly wore out. They recently found that they could use gold nanowires coated with manganese dioxide along with an electrolyte gel. This combination has resulted in a battery that can be recharged 200,000 times, compared to 6,000 times for most good batteries.

TankTwo String Cell Batteries. One of the biggest problems with batteries is the length of time it takes to recharge. The company TankTwo has developed a technique to build batteries from tiny modular compartments. These are tiny cells with a plastic coating and a conductive outer coating that can self-arrange within the battery. At an electric car charging station, the tiny cells would be sucked out of the battery housing and replaced with fully charged cells – reducing the recharging process to only minutes. The charging station can then recharge the depleted cells at times when electricity is cheapest.

NanoBolt Lithium Tungsten Batteries. Researchers at N1 Technologies have developed a battery structure that allows for greater energy storage and faster recharging. They have added tungsten and carbon nanotubes into lithium batteries that bond to a copper anode substrate to build up a web-like structure. This web forms a much greater surface area for charging and discharging electricity.

Toyota Solid-state Batteries. Toyota recently announced it is introducing a new solid-state lithium-iron-phosphate battery as a replacement for the lithium-ion batteries currently used in its electric vehicles. These batteries are lighter, cost less, and recharge faster. Toyota claims a range of 621 miles per charge and says the battery can be fully recharged in ten minutes. By comparison, the best Tesla battery is good for about half the distance and can take a half-charge in fifteen minutes.

Getting the Lead Out

A recent article in the Wall Street Journal discusses the possible contamination from copper telephone cables that have outer lead sheathing. I’m not linking to the article because it is behind a paywall, but this is not a new topic, and it’s been written about periodically for decades.

The authors looked at locations around the country where lead cables are still present near bus stops, schools, and parks. The article points out that there are still lead cables hanging on poles, crossing bridges, buried beneath rights-of-way, and lying underwater.

Let’s start with a little history. Telephone cables with lead outer sheathing were produced and widely used starting in 1888. This was before we understood the dangers of lead in the environment, and lead was also widely used in paint, water pipes, and other materials used in daily life. Western Electric was the manufacturer of telephone cables for AT&T, and from what I can find, the company stopped making lead cables in the late 1940s. Lead cables were first replaced with cables using plastic sheaths and paper insulators. Starting around 1958, the industry transitioned to cables with polyethylene insulation.

I remember when I was first in the industry in the 1970s that there was already a movement to remove and replace lead cables any time there was a network upgrade to aerial cables. Many of the small telcos I worked with slowly replaced lead cables as part of routine upgrades and maintenance. But it’s a different story for the big telcos because starting in the mid-1980s, the big telcos made a decision to stop upgrading or even maintaining copper cables – what was in place stayed in place.

Even where the big telcos like AT&T and Frontier are building fiber today on poles, they keep the old copper wires. The lowest-cost way to build fiber is to lash the fiber onto existing telephone cables. In most neighborhoods, the telcos add fiber and cut the copper cables dead. But those dead copper cables can easily be expected to stay on poles for another fifty years or more.

I’ve never heard of any telephone company that has tried to retrieve buried telephone cables at the end of economic life. The cables are cut dead and abandoned underground. The idea of digging lead cables out of the ground sounds unrealistic since doing so will invariably disturb and break water, gas, electric, and telecom lines.

I’m also not surprised that the Wall Street Journal found lead cables crossing under bodies of water for the same reason – the cables were likely cut dead and left in place. I can’t imagine the process of retrieving abandoned underwater cables – cables are laid with the help of gravity, but it’s hard to picture getting enough leverage to pull dead cables out of the water.

Telecompetitor wrote an article that quoted an estimate by New Street Research that says it might cost $60 billion to remove lead cables. I doubt that anybody has the facts needed to estimate this cost, but it points out that it would be extremely expensive to get lead cables out of the environment. I doubt that anybody even knows the location of most abandoned buried cables. It’s likely that the old hard-copy blueprints of copper networks are long forgotten or lost. It would be particularly expensive to remove lead cables that are now being used to support fiber networks – that would mean moving the fiber cables to a new messenger support wire.

The WSJ article seems to have been the catalyst for a drop in the stock value of the big telcos. The Telecompetitor article implied that the cable replacement cost is so high that it could kill the willingness of the big telcos to participate in BEAD grants.

When the WSJ article first hit, I assumed this would make a loud noise for a few weeks and would quickly fade away, as has happened every decade since the 1960s. But in this day of social media and sensationalism, there is already talk of having the EPA take up the issue. Even if that happens, there will be huge push-back from the telcos and it will likely take many years before the remaining lead wires are removed. The public should be comforted to know that the vast majority of copper cables on poles are not covered with lead – only cables built from the 1950s or earlier. The bigger concern is probably underground and underwater cables, and those have probably already been in place for at least seventy years.

Unintended Consequences of Satellite Constellations

Astronomy & Astrophysics published a research paper recently that looked at “Unintended Electromagnetic Radiation from Starlink Satellites”. The study was done in conjunction with the Low Frequency Array (LOFAR) telescope in the Netherlands.

The LOFAR telescope is a network of over forty radio antennas spread across the Netherlands, Germany, and the rest of Europe. This array can detect extremely long radio waves from objects in space. The antennas are located purposefully in remote locations to reduce interference from other radio sources.

The study documents that about fifty of the 4,000 current Starlink satellites are emitting frequencies in the range between 150.05 and 153 MHz, which have been set aside worldwide for radio astronomy by the International Telecommunications Union. The emitted radiation from the satellites is not intentional, and the guess is that these are stray frequencies being generated by components of some of the electronics. This is a common phenomenon for electronics of all sorts, but in this case, the stray frequencies are interfering with the LOFAR network.

This interference adds to the larger ongoing concern about the unintended impact of large satellite constellations on various branches of science. We can already see that satellites mar photographs of deep space as they pass in front of cameras. The intended radio transmissions from the satellite constellations can also accumulate and interfere with other kinds of radio telescopes. There is a fear that this radiation will interfere with the Square Kilometre Array Observatory that is being built in Australia and South Africa. This new project is being built in remote locations away from cellphones, terrestrial TV signals, and other radios. But satellite constellations will still pass within range of these highly sensitive radio sites.

The fear of scientists is that interference will grow as the number of satellites increases. Starlink’s current plans are to grow from the current 4,000 satellites to over 12,000 satellites – and the company has approval from the FCC to launch up to 30,000 satellites. There are numerous other satellite companies around the world with plans for constellations – and space is going to get very busy over the next decade.

One of the things that concerns scientists is that there is nowhere to go for relief from these kinds of problems. There are agreements reached at the International Telecommunications Union that set aside various bands of spectrum for scientific research. But there is no international police force with the authority to force satellite companies into compliance.

In this case, Starlink is working with the scientists to identify and isolate the issue to hopefully eliminate the stray radiation from future satellites. If the problem gets too bad, the FCC could intercede with Starlink. But who would intercede with satellites launched by governments that don’t care about these issues?

I don’t know how many of you are stargazers. When I was a kid in the early 60s, it was a big deal to see a satellite crossing the sky. A few satellites, like Telstar, were large bright objects crossing the sky. Most of the new satellites are much smaller, but it still doesn’t take very long watching the sky to see a satellite crossing. The sky is going to be busy when there are tens of thousands of satellites passing overhead. It’s hard to think that won’t have unexpected consequences.

Getting DOCSIS 4.0 to Market

If you read the press releases or listen in on investor calls for the big cable companies over the last year, you might think that the latest cable network technology, DOCSIS 4.0, is right around the corner and will be installed soon. Cable companies have been leaving this impression to fend off competition with fiber. There are millions of new fiber passings being constructed this year where cable companies serve today, and most of the companies building fiber say that they reach at least a 30% market penetration rate within the first year after fiber reaches a neighborhood.

The reality is that it will still be a while until DOCSIS 4.0 networks make it out into neighborhoods. A recent blog from CableLabs spells this out well. This month (July 2023), CableLabs is holding the first big interoperability testing event where different manufacturers will test if their DOCSIS 4.0 equipment is interoperable with other vendors. This kind of interoperability testing is a standard step in the process of moving toward gear that is approved for manufacturing.

Per the CableLabs blog, this testing is a precursor to CableLabs being able to certify specific brands of modems. The blog describes this as the first interoperability testing event that will look at whether a cable modem can operate with the latest version of the DOCSIS 4.0 core equipment. The test will also check whether new modems are backward compatible with earlier versions of DOCSIS. This is only the first of multiple interoperability tests, and later tests will go deeper into more specific functions such as interfacing with the overall network, back-office functions, etc.

It’s normal during this kind of testing that bugs are found in the software and hardware, and it’s likely that there will still be tweaks in many of the components of the DOCSIS 4.0 network.

Only after all of the testing is done and CableLabs is happy that all components of the system are operating correctly and will work together properly can the process of certifying equipment from each vendor begin. That involves sending devices to CableLabs for extensive testing and final approval by the CableLabs Certification Board. Only then will any manufacturer put a device into mass production. Any device that doesn’t pass certification will have to be reworked, and the process started again.

It’s hard to think that it won’t be at least another year until devices start to get certified. After that will be the time needed to mass produce, distribute, and install devices. That could easily mean two years before we might see the first DOCSIS 4.0 network being installed.

With that said, this entire process has been exceedingly fast by industry standards. The DOCSIS 4.0 standard was completed in early 2020, and this process is far ahead of where most new technologies would be only three years after a standard is completed.

The cable companies are in a huge hurry to be able to declare superfast symmetrical speeds to compete against fiber. I’m sure there has been tremendous pressure on CableLabs to speed up each step of the process. This likely meant faster than normal efforts to create breadboard chips and the components needed for equipment. For example, the normal timeline for getting a new chip designed and built can easily take 18 months. DOCSIS 4.0 chips are likely on an accelerated timeline.

Who can say how long it will take cable companies to upgrade networks to DOCSIS 4.0? They will certainly start in the markets where they think the technology makes the most market sense. It could easily take several years to make this upgrade nationwide, assuming that manufacturers will be able to keep up with the demand.