Categories
Technology

The WiFi 6 Revolution

We’re edging closer every day to seeing WiFi 6 in our homes. WiFi 6 will be bolstered by the newly approved 6 GHz frequency, and the combination of WiFi 6 and 6 GHz spectrum is going to revolutionize home broadband.

I don’t think many people understand how many of our home broadband woes are caused by current WiFi technology. WiFi has been an awesome technology that freed our homes from long category 5 wires everywhere, but WiFi has a basic flaw that became apparent when homeowners started to buy hordes of WiFi-enabled devices. WiFi routers are lousy at handling multiple requests for simultaneous service. It’s not unusual for 25% or more of the bandwidth in a home to get eaten by WiFi interference issues.

The WiFi standard was designed to give every device an equal opportunity to use a broadband network. What that means in practical use is that a WiFi router is designed to stop and start to give every broadband device in range a chance to use the available spectrum. Most of us have numerous WiFi devices in our home including computers, tablets, TVs, cellphones, and a wide range of smart home devices, toys, etc. Behind the scenes, your WiFi router pauses during a big file download to check whether your smart thermostat or smartphone wants to communicate. Each pause is quick and imperceptible to you, but while the router is checking in with your thermostat, it’s not processing your file download.

To make matters worse, your current WiFi router also pauses for all of your neighbors’ WiFi networks and devices. Assuming your network is password-protected, these nearby devices won’t use your broadband – but they still cause your WiFi router to pause to see if there is a demand for communications.

The major flaw in WiFi is not the specification that allows all devices to use the network, but the fact that we currently try to conduct all of our WiFi communications through only a few channels. The combination of WiFi 6 and 6 GHz is going to fix a lot of the problems. The FCC approved the 6 GHz band for WiFi use in April 2020. This quadruples the amount of bandwidth available for WiFi. More importantly, the new spectrum opens multiple new channels (fourteen 80 MHz channels and seven 160 MHz channels). This means homes can dedicate specific uses to a given channel – direct computers to one channel, smart TVs to another, cellphones to yet another. You could load all small-bandwidth devices like thermostats and washing machines onto a single channel – it won’t matter if that channel is crowded for devices that use tiny amounts of bandwidth. Separating devices by channel will drastically reduce the interference and delays that come from multiple devices trying to use the same channel.
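To picture the channel-assignment idea, here is a minimal sketch of how a home might group devices across the new channels. The channel counts come from the paragraph above; the channel labels and device groupings are hypothetical examples, not a real router configuration.

```python
# A toy model of per-channel device grouping on the new 6 GHz band.
# Fourteen 80 MHz and seven 160 MHz channels are the counts cited above;
# everything else here is an invented example of how a home might use them.

channels = [f"80MHz-{n}" for n in range(1, 15)] + [f"160MHz-{n}" for n in range(1, 8)]

# Hypothetical assignment: heavy users get wide channels to themselves, and
# all of the low-bandwidth smart devices share a single crowded channel.
assignments = {
    "160MHz-1": ["work laptop", "gaming PC"],
    "160MHz-2": ["living room TV", "bedroom TV"],
    "80MHz-1":  ["cellphones", "tablets"],
    "80MHz-2":  ["thermostat", "washing machine", "doorbell", "smart plugs"],
}

for channel in channels:
    devices = assignments.get(channel, [])
    if devices:
        print(f"{channel}: {', '.join(devices)}")
```

In a real home, the router software would handle this mapping behind the scenes; the point is simply that giving device groups their own channels is what removes the contention.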

The transition to WiFi 6 is going to require devices that support the WiFi 6 standard and that can use the 6 GHz spectrum. We’re just starting to see devices that take advantage of WiFi 6 and 6 GHz in stores.

It looks like the first readily available use of the new technology is being marketed as WiFi 6E. This version of the technology is aimed at wireless devices. Samsung has released WiFi 6E in the Galaxy S21 Ultra phone. It’s been rumored that WiFi 6E will be in Apple’s iPhone 13 handsets. Any phone using Qualcomm’s FastConnect 6700 or 6900 chips will be able to use the 6 GHz spectrum. That’s likely to include laptop computers in addition to cellphones.

It’s going to take a while for the new technology to reach widespread practical use. You can buy routers today that will handle WiFi 6E from Netgear and a few other vendors, meaning that you could use the new spectrum at home for smartphones and devices with a 6E chip. The advantage of doing so would be to move cellphones off of the spectrum being used for applications like gaming, where WiFi interference is a material issue. The new WiFi 6E chips will also handle bandwidth speeds greater than 1 Gbps, which might benefit a laptop but is largely lost on a smartphone. It’s going to be a while until WiFi 6 is available at work or in public places – but over a few years, it will be coming.

The home WiFi network of the future is going to look drastically different than today’s network. One of the downsides of the 6 GHz spectrum is that it doesn’t travel as well through walls as current WiFi, and most homes are going to have to migrate to meshed networks of routers. Smart homeowners will assign various devices to specific channels and I assume that router software will make this easy to do. Separating WiFi devices to different channels is going to eliminate almost all of the WiFi interference we see today. Big channels of 6 GHz spectrum will mean that devices can grab the bandwidth needed for full performance (assuming the home has good broadband from an ISP).

Categories
Technology

Is Fiber a Hundred Year Investment?

I think every client who is considering building a fiber network asks me how long the fiber is going to last. Their fear is having to spend the money at some future point to rebuild the network. Recently, my response has been that fiber is a hundred-year investment – and let me explain why I say that.

We’re now seeing fiber built in the 1980s becoming opaque or developing enough microscopic cracks to impede the flow of light. Fiber built just after 1980 is now forty years old, and the fact that some fiber routes are now showing signs of aging has people worried. But fiber cable has improved greatly over the last forty years, and fiber purchased today is going to avoid many of the aging problems experienced by 1980s fiber. Newer glass is clearer and not likely to grow opaque. Newer glass is also a lot less susceptible to forming microcracks. The sheathing surrounding the fiber is vastly improved and helps to keep light transmissions on path.

We’ve also learned a lot about fiber construction since 1980. It turns out that a lot of the problems with older fiber are due to the stress imposed on the fiber during the construction process. Fiber used to be tugged and pulled too hard and the stress from construction created the places that are now cracking. Fiber construction methods have improved, and fiber enters service today with fewer stress points.

Unfortunately, the engineers at fiber manufacturers won’t cite an expected life for fiber. I imagine their lawyers are worried about future lawsuits. Manufacturers also understand that factors like poor construction methods or repeated fiber cuts can reduce the life of a given fiber. But off the record, I’ve had lab scientists at these companies conjecture that today’s fiber cable, if well handled, ought to be good for 75 years or more.

That still doesn’t necessarily get us to one hundred years. It’s important to understand that the cost of updating fiber is far less than the cost of building the initial fiber. The biggest cost of building fiber is labor. For buried fiber, the biggest cost is getting the conduit into the ground. There is no reason to think that conduit won’t last for far more than one hundred years. If a stretch of buried fiber goes bad, a network owner can pull a second fiber through the tube as a replacement – without having to pay again for the conduit.

For aerial fiber, the biggest cost is often the make-ready effort to prepare a route for construction, along with the cost of installing a messenger wire. To replace aerial fiber usually means using the existing messenger wire and no additional make-ready, so replacing aerial fiber is also far less expensive than building new fiber.

Economists define the economic life of any asset to be the number of years before an asset must be replaced, either due to obsolescence or due to loss of functionality. It’s easy to understand the economic life of a truck or a computer – there comes a time when it’s obvious that the asset must be replaced, and replacement means buying a new truck or computer.

But fiber is a bit of an unusual asset in that it is not ripped out and replaced when it finally starts showing end-of-life symptoms. As described above, bringing a replacement fiber to an aerial or buried route costs much less than the original construction. Upgrading fiber is more akin to upgrading a properly constructed building – with proper care, buildings can last for a long time.

Many similar utility assets are not like this. My city is today in the process of replacing a few major water mains that, unbelievably, were built a century ago with wooden pipes. Upgrading the water system means laying down an entirely new water pipe to replace the old one.

It may sound a bit like a mathematical trick, but because replacing fiber doesn’t mean paying 100% of the original cost, the economic life is longer than with other assets. To use a simplified example, if fiber needs replacement every sixty years, and the replacement costs only half of the original investment, then the economic life of the fiber in this example is 120 years – it takes that long before you have spent as much as the original cost to replace the asset.
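Here is a minimal sketch of that arithmetic, using the simplified logic from the paragraph above (replacement every sixty years at half the original cost); the second call, with a quarter-cost replacement, is my own added illustration:

```python
def economic_life(replacement_interval_years, replacement_cost_fraction):
    """Years until cumulative replacement spending equals the original build cost."""
    cumulative_fraction = 0.0
    years = 0
    while cumulative_fraction < 1.0:
        years += replacement_interval_years
        cumulative_fraction += replacement_cost_fraction
    return years

print(economic_life(60, 0.5))    # the post's example: 120 years
print(economic_life(60, 0.25))   # cheaper replacements stretch it to 240 years
```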

I know that people who build fiber want to know how long it’s going to last, and we just don’t know. We know that if fiber is constructed properly, it’s going to last a lot longer than the 40 years we saw from 1980s fiber. We also know that in most cases replacement doesn’t mean starting from scratch. Hopefully, those facts will give comfort that the average economic life of fiber is something greater than 100 years – we just don’t know how much longer.

Categories
Technology

Automation and Fiber

We have clearly entered the age of robots, which can be witnessed in new factories where robots excel at repetitive tasks that require precision. I read an interesting blog at Telescent that talks about using robots to perform routine tasks inside large data centers. Modern data centers are mostly rooms full of huge numbers of switches and routers, and those devices require numerous fiber connections.

The blog talks about the solvable challenges of automating the process of performing huge volumes of fiber cross-connects in data centers. Doing cross-connects with robots would allow for fiber connections to be made 24/7 as needed while improving accuracy. Anybody who has ever been in a big data center can appreciate the challenge of negotiating the maze of fibers running between devices. The Telescent blog predicts that we’ll be seeing the accelerated use of robots in data centers over the next few years as robot technology improves.

This raises the interesting question of whether we’ll ever see robots in fiber networks. As an industry, we’ve already done a good job of automating the most repetitive tasks in our telco, cable, and cellular central offices. Most carriers have automated functions like activating new customers, changing products and features, and disconnecting customers. This has been accomplished through software, and the savings for automation software are significant, as described in this article from Cisco.

But is there a future in the telecom industry for physical robot automation? I look around the industry and the most labor-intensive and repetitive processes are done while building new networks. There probably is no more meticulous and repetitive task than splicing fibers during the construction process or when fixing damaged fibers. Splicing fiber is almost the same process used in the past to splice large telephone copper cables. A technician must match the same fiber from both sheaths to create the needed end-to-end connection in the fiber. This isn’t too hard to do when splicing a 12-fiber cable but is challenging when splicing 144-count or 288-count fibers in outdoor conditions. This is even more challenging when making emergency repairs on aerial fiber in a rain or snowstorm in the dark.

This is the kind of task that robots could master and perform perfectly. It’s not hard to imagine feeding both ends of fiber into a robotized box and then just waiting for the robot to make all of the needed connections and splices perfectly, regardless of the time of day or weather conditions.

I had a recent blog that talked about the shortage of experienced telecom technicians, and splicers are already one of the hardest technicians for construction companies to find. As we keep expanding fiber construction, we’re liable to find projects that get bogged down due to a lack of splicers.

I have no idea if any robot company has even thought about automating the splicing function. We are in the infancy of introducing robots into the workplace and there are hundreds of other repetitive tasks that are likely to be automated before fiber splicing. There might be other functions in the industry that can also be automated if robots get smart enough. The whole industry would breathe a huge sigh of relief if robots could tackle make-ready work on poles.

Categories
Technology, The Industry

Why Fiber?

As much as I’ve written about broadband and broadband technology, it struck me that I have never written a concise response to the question, “Why Fiber?” Somebody asked me the question recently and I immediately knew I had never answered it. If you’re going to build broadband and have a choice of technologies, why is fiber the best choice?

Future-proofed. This is a word that gets tossed around the broadband industry all of the time, to the point that most people don’t stop to think about what it means. The demand for broadband has been growing at a blistering pace. At the end of the third quarter of 2020, the average US home used 384 gigabytes of data per month. That’s up from 218 gigabytes per household per month just two years earlier. That is a mind-bogglingly large amount of data, and most people have a hard time grasping the implications of fast growth over long periods of time. Even fiber network engineers often underestimate future demand because the growth feels unrealistic.

As a useful exercise, I invite readers to plot out that growth at a 21% pace per year – the rate that broadband has been growing since the early 1980s. The amount of bandwidth that we’re likely to use ten, twenty, and fifty years from now will dwarf today’s usage.
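For anyone who wants to run the exercise, here is a minimal sketch of that compounding, starting from the 384 gigabytes per month cited above and the 21% annual growth rate; the outputs are pure extrapolation for illustration, not a forecast:

```python
# Compound the average monthly household usage at 21% per year.
# 384 GB/month is the Q3 2020 figure cited above; 21% is the long-term
# growth rate the post references.

starting_gb_per_month = 384
annual_growth = 0.21

for years in (10, 20, 50):
    projected = starting_gb_per_month * (1 + annual_growth) ** years
    print(f"In {years} years: roughly {projected:,.0f} GB per month")

# Roughly 2,600 GB/month in 10 years, 17,000 GB/month in 20 years,
# and several million GB/month in 50 years.
```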

Fiber is the only technology that can handle the broadband demand today and for the next fifty years. You can already buy next-generation PON equipment that can deliver a symmetrical 10 Gbps data stream to a home or business. The next generation, already in beta testing, will deliver a symmetrical 40 Gbps. The generation after that is likely to be 80 Gbps or 100 Gbps. The only close competitor to fiber is a cable company coaxial network, and the only way to future-proof those networks would be to ditch the bandwidth used for TV, which is the majority of the bandwidth on a cable network. Even if cable companies are willing to ditch TV, the copper coaxial networks are already approaching the end of their economic life. While there has been talk of gigabit wireless to the home (which I’ll believe when I see it), nobody has ever talked about 10-gigabit wireless.

Fiber Has Solved the Upload Problem. Anybody working or schooling from home now needs fast and reliable upload broadband. Fiber is the only technology that solves the upload needs today. Wireless can be set to have faster uploads, but doing so sacrifices download speed. Cable networks will only be able to offer symmetrical broadband with an expensive upgrade using technology that won’t be available for at least three years. The industry consensus is that cable companies will be loath to upgrade unless forced to by competition.

Easiest to Operate. Fiber networks are the easiest to operate since they transmit light instead of radio waves. Cable company and telco copper networks act like giant antennas that pick up interference. Interference from other wireless providers or from natural phenomena is the predominant challenge of wireless technologies.

A fiber network means fewer trouble calls, fewer truck rolls, and lower labor costs. It’s far faster to troubleshoot problems in fiber networks. Fiber cables are also surprisingly strong, and fiber is often the only wire still functioning after a hurricane or ice storm.

Lower Life Cycle Costs. Fiber is clearly expensive to build, but the cost characteristics over a fifty-year time frame can make fiber the lowest-cost long-term option. Nobody knows how long fiber will last, but fiber manufactured today is far superior to fiber built a few decades ago. When fiber is installed carefully and treated well it might well last for most of a century. Fiber electronics are likely to have to be upgraded every 10-12 years, but manufacturers are attuned to technology upgrades that allow older customer devices to remain even after an upgrade. When considering replacement costs and ongoing maintenance expenses, fiber might be the lowest-cost technology over long time frames.

Categories
Technology

Powering the Future

For years there have been predictions that the world would be filled with small sensors that would revolutionize the way we live. Five years ago, there were numerous predictions that we’d be living in a cloud of sensors. The limitation on realizing that vision has been figuring out how to power sensors and the other electronics. Traditional batteries are too expensive and have a limited life. As you might expect, scientists from around the world have been working on better power technologies.

Self-Charging Batteries. The California company NDB has developed a self-charging battery that could remain viable for up to 28,000 years. Each battery contains a small piece of radioactive carbon-14 recovered from recycled nuclear fuel rods. As the isotope decays, the battery uses a heat sink of lab-created carbon-12 diamond, which captures the energetic particles of decay while acting as a tough physical barrier to contain the radiation.

The battery consists of multiple layers of radioactive material and diamond and can be fashioned into any standard battery size, like a AAA. The overall radiation level of the battery is low – less than the natural radiation emitted by the human body. Each battery is effectively a small power generator in the shape of a traditional battery that never needs to be recharged. One of the most promising aspects of the technology is that nuclear power plants pay NDB to take the radioactive material.

Printed Flexible Batteries. Scientists at the University of California San Diego have been researching batteries that use silver oxide–zinc chemistry. They’ve been able to create a flexible device that offers ten times the energy density of lithium-ion batteries. The flexible material means that batteries can be shaped to fit devices instead of devices being designed to fit batteries.

Silver–zinc batteries have been around for many years, and the breakthrough is that the scientists found a way to screen print the battery material, meaning a battery can be placed onto almost any surface. The printing process, done in a vacuum, layers on the current collectors, the zinc anode, the cathode, and the separator to create a polymer film that is stable up to almost 400 degrees Fahrenheit. The net result is a battery with ten times the power output of a lithium-ion battery of the same size.

Anti-Lasers. Science teams from around the world have been working to create anti-lasers. A laser operates by beaming photons, while an anti-laser sucks up photons from the environment. An anti-laser can be used in a laptop or cellphone to collect photons and use them to power the battery in the device.

The scientific name for the method being used is coherent perfect absorption (CPA). In practice, this requires one device that beams out a photon light beam and devices with CPA technology to absorb the beams. In the laboratory, scientists have been able to capture as much as 99.996% of the transmitted power, making this more energy-efficient than plugging a device into electric power. There are numerous possible uses for the technology, starting with the obvious ability to charge devices that aren’t plugged into electricity. But the CPA devices have other possible uses. For example, the devices are extremely sensitive to changes in photons in a room and could act as highly accurate motion sensors.

Battery-Free Sensors. In the most creative solution I’ve read about, MIT scientists started a new firm, Everactive, and have developed sensors that don’t require a battery or external power source. The key to the Everactive technology is the use of ultra-low-power integrated circuits that are able to harvest energy from ambient sources like low light, background vibrations, or small temperature differentials.

Everactive is already deploying sensors in applications where it’s hard to change sensors, such as inside steam-generating equipment. The company also makes sensors that monitor rotating machinery and that are powered by the vibrations coming from the machinery. Everactive says its technology has a much lower lifetime cost than traditionally powered sensors when considering the equipment downtime and cost required to periodically replace batteries.

Categories
Technology, The Industry

Building Rural Coaxial Networks

Charter won $1.22 billion in the RDOF grant auction and promised on its short-form application to build gigabit broadband. Charter won grant areas in 24 states, including being the largest winner in my state of North Carolina. I’ve had several people ask me if it’s possible to build rural coaxial networks, and the answer is yes, but with some caveats.

Charter and other cable companies use hybrid fiber-coaxial (HFC) technology to deliver service to customers. This technology builds fiber to neighborhood nodes and then delivers services from the nodes using coaxial copper cables. HFC networks follow a standard called DOCSIS (Data Over Cable Service Interface Specification) that was created by CableLabs. Charter currently uses the latest standard, DOCSIS 3.1, which easily allows for the delivery of gigabit download speeds, but something far slower for upload.

There are several distance limitations of an HFC network that come into play when deploying the technology in rural areas. First, there is a limitation of roughly 30 miles between the network core and a neighborhood node. The network core in an HFC system is called a CMTS (cable modem termination system). In urban markets, a cable company will usually have only one core, and there are not many urban markets where 30 miles is a limiting factor. But 30 miles becomes a limitation if Charter wants to serve the new rural areas from an existing CMTS hub, which would normally be located in a larger town or county seat. In glancing through the rural locations that Charter won, I see places that are likely going to force Charter to establish a new rural hub and CMTS. There is new technology available that allows a small CMTS to be deployed in the field, so perhaps Charter is looking at this technology. It’s not a technology that I’ve seen used in the US, and the leading manufacturers of small CMTS technology are the Chinese electronics companies that are banned from selling in the US. If Charter is going to reach rural neighborhoods, in many cases it will have to deploy a rural CMTS in some manner.

The more important distance limitation is in the last mile of the coaxial network. Transmissions over an HFC network can travel about 2.5 miles without needing an amplifier. 2.5 miles isn’t very far, and amplifiers are routinely deployed to boost the signals in urban HFC networks. Engineers tell me that the maximum number of amplifiers that can be deployed in a cascade is 5, and beyond that number, the broadband signal strength quickly dies. This limitation means that the longest run of coaxial cable to reach homes is about 12.5 miles. That’s 12.5 miles of cable, not 12.5 miles as the crow flies.
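A minimal sketch of the distance math as laid out above; the 2.5-mile spacing and the five-amplifier limit come from the paragraph, and whether the first span out of the node also counts against the amplifier budget is an engineering detail this simple version ignores:

```python
# Rough cable-reach math for a rural HFC leg, using the figures in the post.
amp_spacing_miles = 2.5   # signal travels ~2.5 miles before needing an amplifier
max_amplifiers = 5        # engineers' rule-of-thumb cascade limit cited above

max_cable_miles = amp_spacing_miles * max_amplifiers
print(f"Longest coax run from a node: about {max_cable_miles} miles of cable")
# That's miles of cable along the route, not straight-line distance, which is
# why rural nodes may end up serving only a handful of homes.
```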

To stay within the 12.5-mile limit, Charter will have to deploy a lot of fiber and create rural nodes that might serve only a few homes. This was the same dilemma faced by the big telcos when they were supposed to upgrade DSL with CAF II money – the telcos needed to build fiber deep into rural areas to make it work. The telcos punted on the idea, and we now know that a lot of the CAF II upgrades were never made.

Charter faces another interesting dilemma in building an HFC network. The price of copper has grown steadily over the last few decades, and copper now costs four times as much as it did in 2000. This means that coaxial cable is now relatively expensive to buy (a phenomenon that anybody building a new house discovers when they hear the price of new electrical wiring). It might make sense in a rural area to build more fiber to reduce the miles of coaxial cable.

Building rural HFC makes for an interesting design. There were a number of rural cable systems built sixty years ago at the start of the cable industry, because these were the areas in places like Appalachia that had no over-the-air TV reception. But these early networks carried only a few channels of TV, meaning that the distance limitations were a lot less critical. But there have been few rural cable networks built in more recent times. Most cable companies have a metric where they won’t build coaxial cable plant anywhere with fewer than 20 homes per road mile. The RDOF grant areas are far below that metric, and one has to suppose that Charter thinks that the grants make the math work.

To answer the original question – it is possible to build rural coaxial networks that can deliver gigabit download speeds. But it’s also possible to take some shortcuts and overextend the amplifier budget and curtail the amount of bandwidth that can be delivered. I guess we’ll have to wait a few years to see what Charter and others will do with the RDOF funding.


Categories
Technology

Explaining Open RAN

If you read more than an article or two about 5G and cellular technology you’re likely to run across the term Open RAN. You’re likely to get a sense that this is a good thing, but unless you understand cellular networks the term probably means little else. Open RAN is a movement within the cellular industry to design cellular networks using generic equipment modules so that networks can be divorced from proprietary technologies and can be controlled by software. This is akin to what has happened in big data centers where software now controls generic servers.

The first step in creating Open RAN has been to break the network down into specific functions to allow for the development of generic hardware. Today’s cellular networks have two major components – the core network and the radio access network (RAN). The easiest analogy for the core network is a tandem switching center. Cellular carriers have regional hubs where electronics and switches process the traffic from large numbers of cell sites. The RAN is all of the cell sites where the cellular company maintains a tower and radios to communicate with customers.

Open RAN has broken the cell network into three generic modules. The radio unit (RU) is located near, or incorporated into, the antenna and contains the electronics that transmit and receive signals from customers. The distributed unit (DU) is the brains at the cell site. The centralized unit (CU) is a more generic set of core hardware that communicates between the core and the distributed units.

The next step in developing Open RAN has been to ‘open’ the protocols and interfaces between the components of the cellular network. The industry has created the O-RAN Alliance, which has developed open-source software that controls all aspects of the cellular network. The software has been developed in eleven generic modules that handle the major functions of the cellular network. For example, there is a software module for controlling the front-haul function between the radio unit and the distributed unit, a module for the mid-haul function between the distributed unit and the centralized unit, etc.
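To make the disaggregated layout a bit more concrete, here is a toy sketch of the RU/DU/CU split and the front-haul and mid-haul links between them; the class and function names are hypothetical illustrations, not the actual O-RAN Alliance software modules:

```python
# A toy model of the disaggregated Open RAN layout described above.
# The RU/DU/CU roles and the front-haul/mid-haul links come from the post;
# the names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class RadioUnit:          # at or near the antenna; transmits/receives customer signals
    site: str

@dataclass
class DistributedUnit:    # the "brains" at the cell site
    site: str
    fronthaul: list[RadioUnit] = field(default_factory=list)   # DU <-> RU links

@dataclass
class CentralizedUnit:    # generic hardware between the core and the DUs
    region: str
    midhaul: list[DistributedUnit] = field(default_factory=list)  # CU <-> DU links

# Because the interfaces are open, software can push a feature change to every
# site at once instead of a technician touching each cell site.
def push_feature(cu: CentralizedUnit, feature: str) -> None:
    for du in cu.midhaul:
        print(f"Applying '{feature}' to DU at {du.site} and its {len(du.fronthaul)} radios")

cu = CentralizedUnit("southeast", [DistributedUnit("tower-17", [RadioUnit("tower-17")])])
push_feature(cu, "new-scheduler-profile")
```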

While the industry has created generic open-source software, each large carrier will create its own flavor of the software to configure features the way it wants them. Today it’s hard to tell the difference between using AT&T versus T-Mobile, but that is likely to change over time as each carrier develops its own set of features.

There are some huge benefits to an Open RAN network. The first is savings on hardware. It’s far less expensive to buy generic radios rather than proprietary radio systems from one of the major vendors. In data centers, we’ve seen the cost of generic switches and servers drop hardware costs by as much as 80%.

But the biggest benefit of Open RAN is the ability to control cell sites with a single software system. Today, the task of updating cell sites to a new feature is a mind-boggling task if the upgrade requires any hardware. That requires a technician to visit every cell site in a nationwide network. Even software upgrades are a challenge and often have to be done today on site since there are numerous configurations of cell sites in a network. With Open RAN, features would be fully software-driven and could all be updated at the same time.

The cellular carriers love the concept because Open RAN frees them to develop unique solutions for customers that are software-driven and not limited by proprietary hardware and software. The industry has always talked about developing specialized features for industries like agriculture or hospitals, and Open RAN provides the platform to finally do that. Even better, each major hospital chain could have the unique features it desires. This leads to an exciting future where customers can help design their own features rather than choosing from a menu of industry features.

Interestingly, the Open RAN concept will also carry over into cellphones, where the best cellphones will have generic chips that can be updated to develop new features without having to upgrade phones every few years.

Converting to Open RAN won’t be cheap or easy because it will ultimately mean scrapping most of the electronics and software being used today at every cell site. We’re likely to first see the big carriers phase in Open RAN by segments, such as using the solution for small cell sites before converting the big tower sites.

One cellular carrier is likely to take the lead in this movement. Dish Network is in the process of building a nationwide cellular network from scratch, and the company has fully embraced Open RAN. This will put pressure on the other carriers to catch up if Dish’s nimble network starts capturing large nationwide customers.

Categories
Technology

Technology Trends for 2021

The following are the most important current trends that will be affecting the telecom industry in 2021.

Fiber Construction Will Continue Fast and Furious in 2021. Carriers of all shapes and sizes are still building fiber. There is a bidding war going on to get the best construction crews and fiber labor rates are rising in some markets.

The Supply Chain Still Has Issues. The huge demand for building new fiber had already put stress on the supply chain at the beginning of 2020. The pandemic increased the delays as big buyers reacted by re-sourcing some of the supply chain outside of China. At the end of 2020, there was a historically long waiting time for new and smaller buyers to buy fiber because the biggest fiber builders had pre-ordered huge quantities of fiber cable. Going into 2021, the delays for electronics have lessened, but there will be issues with buying fiber for much of 2021. By the end of the year, this ought to return to normal. Any new fiber builder needs to plan ahead and order fiber early.

Next-Generation PON Prices Dropping. The prices for 10-gigabit PON technologies continue to drop, and the equipment is now perhaps 15% more expensive than GPON technology, which supports speeds up to a symmetrical gigabit. Anybody building a new network needs to consider the next-generation technology, or at least choose equipment that will fit into a future overlay of the faster technology.

Biggest ISPs are Developing Proprietary Technology. In a trend that should worry smaller ISPs, most of the biggest ISPs are developing proprietary technology. The cable companies have always done this through CableLabs, but now companies like Comcast are striking out with their own versions of gear. Verizon is probably leading the pack and has developed proprietary fiber-to-the-curb technology using millimeter-wave spectrum as well as proprietary 5G equipment. The large ISPs collectively are pursuing open-source routers, switches, and FTTP electronics that each company will then control with proprietary versions of software. The danger in this trend for smaller ISPs is that a lot of routinely available technology may become hard to find or very expensive once the big ISPs are no longer participating in the market.

Fixed Wireless Gear Improving. The electronics used for rural fixed wireless are improving rapidly as vendors react to the multiple new bands of spectrum approved by the FCC over the last year. The best gear now seamlessly integrates multiple bands of spectrum and also meets the requirements to notify other carriers when shared spectrum bands are being used.

Big Telcos Walking Away from Copper. AT&T formally announced in October 2020 that it will no longer add new DSL customers. This is likely the first step for the company to phase out copper service altogether. The company has been claiming for years that it loses money maintaining the old technology. Verizon has been even more aggressive and has been phasing out copper service at the local telephone exchange level throughout the Northeast for the last few years. DSL budgets will be slashed and DSL techs let go, and as bad as DSL is today, it’s going to go downhill fast from here.

Ban on Chinese Electronics. The US ban on Chinese electronics is now in full force. Not only are US carriers forbidden from buying new Chinese electronics, but Congress has approved funding to rip out and replace several billion dollars of currently deployed Chinese electronics. This ostensibly is being done for network security because of fears that Chinese equipment includes a backdoor that can be hacked, but this is also tied up in a variety of trade disputes between the US and China. I’m amazed that we can find $2 billion to replace electronics that likely pose no threat but can’t find money to properly fund broadband.

5G Still Not Here. In 2021 there is still no actual 5G technology being deployed. Instead, what is being marketed today as 5G is really 4G delivered over new bands of spectrum. We are still 3 to 5 years away from seeing any significant deployment of the new features that define 5G. This won’t stop the cellular carriers from crowing about the 5G revolution for another year. But maybe we’ve turned the corner and there will be fewer than the current twenty 5G ads during a single football game.

Categories
Technology

Understanding Oversubscription

It’s common to hear that oversubscription is the cause of slow broadband – but what does that mean? Oversubscription comes into play in any network when the aggregate subscribed customer demand is greater than the available bandwidth.

The easiest way to understand the concept is with an example. Consider a passive optical network where up to 32 homes share the same neighborhood fiber. In the most common GPON technology, the customers on one of these neighborhood nodes (called a PON) share a total of 2.4 gigabits per second of download bandwidth.

If an ISP sells a 100 Mbps download connection to 20 customers on a PON, then in aggregate those customers could use as much as 2 gigabits per second, meaning there is still unsold capacity – each customer is guaranteed the full 100 Mbps connection inside the PON. However, if an ISP sells a gigabit connection to 20 customers, then 20 gigabits per second of potential customer usage has been pledged over the same 2.4-gigabit physical path. The ISP has sold customers more than 8 times the capacity that is physically available, and this particular PON has an oversubscription ratio of a little over 8.
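Here is a minimal sketch of the ratio math from the example above, using the 2.4 Gbps of shared GPON capacity and the two subscriber scenarios:

```python
# Oversubscription ratio = total speed sold to customers / shared physical capacity.
PON_CAPACITY_MBPS = 2_400          # GPON download capacity shared across the PON

def oversubscription_ratio(customers, speed_mbps):
    sold_mbps = customers * speed_mbps
    return sold_mbps / PON_CAPACITY_MBPS

print(oversubscription_ratio(20, 100))    # 0.83 -> under 1, capacity still unsold
print(oversubscription_ratio(20, 1_000))  # 8.3  -> the roughly 8x ratio in the example
```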

When people first hear about oversubscription, they are often aghast – they think an ISP has done something shady and is selling people more bandwidth than can be delivered. But in reality, an oversubscription ratio recognizes how people use bandwidth. It’s highly likely in the example of selling gigabit connections that customers will always have access to their bandwidth.

ISPs understand how customers use bandwidth, and they can take advantage of the real behavior of customers in deciding oversubscription ratios. In this example, it’s highly unlikely that any residential customer ever uses a full gigabit of bandwidth – because there is almost no place on the web where a residential customer can connect at that speed.

But more importantly, a home subscribing to a gigabit connection isn’t using most of that bandwidth most of the time. A home isn’t using much bandwidth when people are asleep or away from home. The residents of a gigabit home might spend the evening watching a few simultaneous videos and barely use any bandwidth. The ISP is banking on the normal behavior of its customers in determining a safe oversubscription ratio. ISPs have come to learn that households buying gigabit connections often don’t use any more bandwidth than homes buying 100 Mbps connections – they just complete web transactions faster.

Even should bandwidth in this example PON ever get too busy, the issue is likely temporary. For example, if a few doctors lived in this neighborhood and were downloading big MRI files at the same time, the neighborhood might temporarily cross the 2.4-gigabit available bandwidth limit. Since transactions happen quickly for a gigabit customer, such an event would not likely last very long, and even when it was occurring most residents in the PON wouldn’t see a perceptible difference.

It is possible to badly oversubscribe a neighborhood. Anybody who uses a cable company for broadband can remember back a decade when broadband slowed to a crawl when homes started watching Netflix in the evening. The cable company networks were not designed for steady video streaming and were oversubscribing bandwidth by factors of 200 to one or higher. It became routine for the bandwidth demand for a neighborhood to significantly surpass network capacity, and the whole neighborhood experienced a slowdown. Since then, the cable companies have largely eliminated the problem by decreasing the number of households in a node.

As an aside, ISPs know they have to treat business neighborhoods differently. Businesses might engage in steady large bandwidth uses like connecting to multiple branches, using software platforms in the cloud, using cloud-based VoIP, etc. An oversubscription ratio that works in a residential neighborhood is likely to be far too high in some business neighborhoods.

To make the issue even more confusing, the sharing of bandwidth at the neighborhood level is only one place in a network where oversubscription comes into play. Any other place inside the ISP network where customer data is aggregated and combined will face the same oversubscription issue. The industry uses the term chokepoint to describe a place in a network where bandwidth can become a constraint. There is a minimum of three chokepoints in every ISP network, and there can be many more. Bandwidth can be choked in the neighborhood as described above, can be choked in the primary network routers that direct traffic, or can be choked on the path between the ISP and the Internet. If any chokepoint in an ISP network gets over-busy, then the ISP has oversubscribed the portion of the network feeding into the chokepoint.
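One way to picture the chokepoint idea is that a customer’s achievable speed is capped by the most congested link between the home and the Internet. A minimal sketch, with the three chokepoints from the paragraph above and invented capacity numbers:

```python
# A customer's effective speed is limited by the tightest chokepoint in the path.
# The three chokepoints listed are from the post; the Mbps figures are made up.
chokepoints_mbps = {
    "neighborhood PON": 950,     # share of the PON available at this moment
    "core routers": 1_000,       # capacity through the ISP's routing core
    "internet transit": 600,     # the ISP's connection to the Internet, at peak
}

bottleneck = min(chokepoints_mbps, key=chokepoints_mbps.get)
print(f"Effective ceiling: {chokepoints_mbps[bottleneck]} Mbps, set by {bottleneck}")
```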

Categories
Current News, Technology

Quantum Encryption

Verizon recently conducted a trial of quantum key distribution technology, which is the first generation of quantum encryption. Quantum cryptography is being developed as the next-generation encryption technique that should protect against hacking from quantum computers. Carriers like Verizon care about encryption because almost every transmission inside of our communications paths is encrypted.

The majority of encryption today uses asymmetric encryption. That means encryption techniques rely on the use of secure keys. To use an example, if you want to send encrypted instructions to your bank (such as to pay your broadband bill), your computer uses the publicly available key issued by the bank to encode the message. The bank then uses a different private key that only it has to decipher the message.

Key-based encryption is safe because it takes immense amounts of computing power to guess the details of the private key. Encryption methods today mostly fight off hacking by using long encryption keys – the latest standard is a key consisting of at least 2048 bits.
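As an illustration of the public/private key flow described above, here is a minimal sketch using the Python cryptography package and a 2048-bit RSA key; the bank scenario is just the post’s example, and real banking traffic layers its encryption differently:

```python
# Minimal public/private key example: encrypt with the recipient's public key,
# decrypt with the private key only the recipient holds.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# In practice the bank generates this pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"please pay my broadband bill", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext)   # only the private-key holder can recover the message
```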

Unfortunately, the current encryption methods won’t stay safe for much longer. It seems likely that quantum computers will soon have the capability of cracking today’s encryption keys. This is possible because quantum computers can perform thousands of simultaneous calculations and could cut the time needed to crack an encryption key from months or years down to hours. Once a quantum computer can do that, then no current encryption scheme is safe. The first targets for hackers with quantum computers will probably be big corporations and government agencies, but it probably won’t take long to turn the technology to hacking into bank accounts.

Today’s quantum computers are not yet capable of cracking today’s encryption keys, but computing experts say that it’s just a matter of time. This is what is prompting Verizon and other large ISPs to look for a form of encryption that can withstand hacks from quantum computers.

Quantum key distribution (QKD) uses a method of encryption that might be unhackable. Photons are sent one at a time through a fiber optic path to accompany an encrypted message. If anybody attempts to intercept or listen to the encrypted stream, the polarization of the photons is disturbed, and the recipient of the encrypted message instantly knows the transmission is no longer safe. The theory is that this will stop hackers before they know enough to crack into and analyze a data stream.
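The post doesn’t say which QKD protocol Verizon tested; the classic scheme is BB84, and this toy simulation (my own illustration, not Verizon’s implementation) shows the core idea – when an eavesdropper measures the photons, roughly a 25% error rate shows up in the key bits that the sender and receiver compare:

```python
import random

def bb84_error_rate(n_photons=10_000, eavesdropper=False):
    """Toy BB84: estimate the error rate the receiver sees in the sifted key."""
    errors = 0
    matches = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        alice_basis = random.choice("+x")      # sender's random polarization basis

        if eavesdropper:
            eve_basis = random.choice("+x")
            # Measuring in the wrong basis randomizes the bit Eve re-sends.
            bit_in_flight = bit if eve_basis == alice_basis else random.randint(0, 1)
            send_basis = eve_basis
        else:
            bit_in_flight, send_basis = bit, alice_basis

        bob_basis = random.choice("+x")
        # The receiver only reads the bit reliably in the sender's basis.
        bob_bit = bit_in_flight if bob_basis == send_basis else random.randint(0, 1)

        # Sifting: keep only positions where sender's and receiver's bases agree.
        if bob_basis == alice_basis:
            matches += 1
            errors += (bob_bit != bit)
    return errors / matches

print(f"No eavesdropper:   ~{bb84_error_rate():.1%} errors in the sifted key")
print(f"With eavesdropper: ~{bb84_error_rate(eavesdropper=True):.1%} errors")
```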

The Verizon trial added a second layer of security using a quantum random number generator. This technique generates random numbers and constantly updates the decryption keys in a way that can’t be predicted.

Verizon and others have shown that these encryption techniques can be performed over existing fiber optics lines without modifying the fiber technology. There was a worry in early trials of the technology that new types of fiber transmission gear would be needed for the process.

For now, the technology required for quantum encryption is expensive, but as the price of quantum computer chips drops, this encryption technique ought to become affordable and available to anybody who wants to encrypt a transmission.