You might remember press releases from AT&T in 2018 that promised a revolution in rural broadband from a technology called AirGig. The technology was described as using millimeter-wave spectrum to shoot focused radio beams along power lines, with the electric field of the power lines somehow acting to keep the transmissions focused along the wires.
AT&T said at the time that the technology could deliver hundreds of megabits of data to rural homes using a network built from inexpensive plastic components mounted on power lines. The last I heard of the technology was this AT&T video released in 2019.
There had been a field trial of the technology conducted with Georgia Power, and the CEO of the electric company was enthusiastic at the time about the technology. AT&T talked about starting the process of manufacturing hardware. And then . . . crickets. There hasn’t been a word on the web about the technology since then.
I saw articles published by IEEE in 2019 that talked about a different broadband-over-powerline (BPL) technology developed by Panasonic. IEEE amended the standard for BPL to recognize Panasonic’s HD-PLC technology. Panasonic claims to have reached 60 Mbps transmissions using the technology but thought it could goose this to several hundred Mbps.
I always wondered how much of the AT&T announcement on AirGig was hype. Timing-wise, the AirGig announcement came in the middle of the 5G craze, when the cellular carriers were trying to gain major concessions from the government to promote 5G. AT&T and the other carriers wanted a lot more spectrum – and they’ve largely gotten it. Perhaps they were using AirGig to justify more spectrum. But the video shows that AT&T has gotten a pile of patents for the technology, so it seems to be the real deal.
Today’s blog asks what happened, and I hope somebody who knows will chime in. Did field trials reveal a fatal flaw in the technology? That’s always possible with any wireless technology. Did the technology just underperform and not deliver the promised broadband speeds? Or will AT&T spring a finished technology on the world one of these days?
Somebody asked me to explain software defined networking (SDN), and I thought a good way to answer the question was to send them to an article that explains the concept. I couldn’t find anything on the web that explains SDN in plain English. This is not unusual for technical topics since tech guys generally have problems explaining what they do to laypeople. They hate boiling things down to simple language because a simple description doesn’t capture the nuances of the technology. I’ve always challenged engineers I work with to explain what they do in a way that their mother could understand – and most look at me like I’m an alien. I won’t promise that this is in plain English, but here is my shot at explaining SDN to a non-technical person.
At its most basic, SDN is a technology that allows networks to be centrally and intelligently controlled or programmed. What does that mean?
There was a time in early computing when a network owner purchased all of the network gear from one vendor. Doing so made it possible to control the network with one set of software as long as the network owner could master the protocols used by the vendor. This sent a whole generation of IT technicians to become Cisco certified to prove that they had mastered Cisco network gear.
But it’s no longer reasonable today to have a complex network provisioned from one vendor. For one thing, most networks now use the cloud to some extent as part of the network – meaning they use computing power that is outside the direct control of the network owner. The pandemic has also forced most companies into allowing their network to communicate with remote employees – something that many companies refused to consider in the past. Networks have also gotten more complex due to the need to control Internet of Things devices – networks don’t just communicate with computers anymore.
The first goal of SDN is to bring everything under one big software umbrella. SDN provides a software platform that lets a network owner visualize the entire network. What does that mean in plain English? The goal of a network owner is to efficiently flow data to where it needs to go and to do so safely. It’s incredibly challenging to understand the flow of data in a network comprised of multiple devices, multiple feeds to and from the outside world, and constantly shifting demand from users on how they want to use the data.
SDN is a software platform that enables the network owner to see the data flow between different parts of the platform. Modern SDN technology has evolved from the OpenFlow protocol developed in 2008 in a collaboration between Stanford University and the University of California at Berkeley. The original platform enabled a network owner to measure and manage data traffic between routers and switches, regardless of the brand of equipment.
Over time, SDN has grown in sophistication and can do much more. As an example, with SDN, a network owner can set different levels of security for different parts of the network. A network operator might wall off traffic between remote employees and core data storage so that somebody working remotely can’t get access to some parts of the network. SDN software provides a way to break a network into subsets and treat each of them differently in terms of security protocols, the priority of routing, and access to other parts of the network. This is something that can’t easily be done by tinkering with the software settings of each individual router and switch – which is what network operators tried to do before SDN.
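The segmentation idea can be sketched as a toy centralized policy table. This is purely an illustration of the concept – not a real OpenFlow or vendor API – and the segment names and actions are hypothetical:

```python
# Toy sketch of SDN-style centralized policy (illustrative only).
# One controller table decides how traffic between network segments is
# treated, instead of configuring each router and switch individually.

# Policy table: (source segment, destination segment) -> action
POLICY = {
    ("remote_employees", "core_storage"): "deny",   # wall off remote users
    ("remote_employees", "email"):        "allow",
    ("office_lan",       "core_storage"): "allow",
    ("iot_devices",      "core_storage"): "deny",   # IoT can't reach storage
}

def controller_decision(src_segment, dst_segment):
    """Return the action every device in the network should apply."""
    # Default-deny is the safer posture for unlisted segment pairs.
    return POLICY.get((src_segment, dst_segment), "deny")

print(controller_decision("office_lan", "core_storage"))       # allow
print(controller_decision("remote_employees", "core_storage")) # deny
```

The point of the sketch is that the policy lives in one place: changing one table entry changes the behavior of every device, which is exactly what tinkering with individual routers and switches can’t do.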
There have been huge benefits from SDN. Probably the biggest is that SDN allows a network owner to use generic white-box devices in the network – inexpensive routers and switches that are not pre-loaded with expensive vendor software. The SDN software can direct the generic devices to perform a needed function without the box needing to be pre-programmed. That’s the second big benefit of SDN – the whole network can be programmed as if every device came from the same vendor. The SDN software can tell each part of the network what to do and can even override preset functions from vendors.
It’s easy to see why SDN is hard for a network engineer to explain – they don’t want to describe the primary goals of SDN without dipping into how it does all of this, and that is incredibly hard to explain without technical language and jargon. For that, I’d send you to the many articles written on the topic.
Most people have heard of latency, which is a measure of the average delay of data packets on a network. There is another important measure of network quality that is rarely talked about. Jitter is the variance in the delays of signals being delivered through a broadband network connection. Jitter occurs when the latency increases or decreases over time.
We have a tendency in the industry to oversimplify technical issues. We take a speed test and assume the answer that pops out is our speed. Those same speed tests also measure latency, and even network engineers sometimes get mentally lazy and are satisfied to see an expected latency number on a network test. But in reality, the broadband signal coming into your home is incredibly erratic. From millisecond to millisecond, the amount of data hitting your home network varies widely. Measuring jitter means measuring the degree of network chaos.
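The distinction between latency and jitter is easy to see with numbers. This sketch uses one common simple definition of jitter – the average change between consecutive latency measurements – with made-up samples; two connections can have identical average latency but wildly different jitter:

```python
# Latency vs. jitter from the same series of latency samples (in ms).
# Jitter here = average absolute change between consecutive samples,
# a simple common definition (RTP uses a smoothed variant of this idea).

def mean_latency(samples):
    return sum(samples) / len(samples)

def jitter(samples):
    """Average absolute difference between consecutive latency samples."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs)

# Two connections with the same average latency but very different jitter.
steady  = [20, 21, 20, 19, 20, 21]   # ms
erratic = [5, 38, 12, 29, 8, 29]     # ms

print(mean_latency(steady), mean_latency(erratic))  # same average
print(jitter(steady))    # 1.0 ms
print(jitter(erratic))   # 23.6 ms
```

A speed test that reports only the average latency would call these two connections identical – the jitter number is what captures the chaos.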
Jitter increases when networks get overwhelmed, even temporarily. Delays are caused in any network when the amount of data being delivered exceeds what can be accepted. There are a few common causes of increased jitter:
· Not Enough Bandwidth. Low bandwidth connections experience increased jitter when incoming packets exceed the capacity of the broadband connection. This effect can cascade and multiply when the network is overwhelmed – being overly busy increases jitter, and the worse jitter then makes it even harder to receive incoming packets.
· Hardware Limitations. Networks can bog down when outdated routers, switches, or modems can’t fully handle the volume of packets. Even issues like old or faulty cabling can cause delays and increase jitter.
· Network Handoffs. Handoff points between networks are the most vulnerable bottlenecks in a network. The most common bottleneck in our homes is the device that converts landline broadband into WiFi. Even a slight hiccup at a bottleneck will negatively impact performance in the entire network.
All of these factors help to explain why old technology like DSL performs even worse than might be expected. Consider a home that has a 15 Mbps download connection on DSL. If an ISP were to instead deliver a 15 Mbps connection on fiber, the same customer would see a significant improvement. A fiber connection would avoid the jitter issues caused by antiquated DSL hardware. We tend to focus on speeds, but a 100 Mbps connection on a fiber network will typically have a lot less jitter than a 100 Mbps connection on a cable company network. Customers who try a fiber connection for the first time commonly say that the network ‘feels’ faster – what they are noticing is the reduced jitter.
Jitter can be deadliest to real-time connections – most people aren’t concerned about jitter if it means it takes a little longer to download a file. But increased jitter can play havoc with an important Zoom call or with maintaining a TV signal during a big sports event. It’s easiest to notice jitter when a real-time function hesitates or fails. Your home might have plenty of download bandwidth, and yet broadband connections can still fail because small problems caused by jitter accumulate until the connection fails.
ISPs have techniques that can help to control jitter. One of the more interesting ones is a jitter buffer that grabs and holds data packets that arrive too quickly. It may not feel intuitive that slowing a network can improve quality. But recall that jitter comes from variation in the delays between packets in the same transmission. There is no way to make the slowest packets arrive any sooner – so slowing down the fastest ones increases the chance that Zoom call packets can be delivered evenly.
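Here is a minimal sketch of the jitter-buffer idea with hypothetical packet timings (the 40 ms buffer depth is an assumption – real buffers size this adaptively). Each packet is held until a fixed playout deadline, so packets that arrived early wait for the stragglers:

```python
# Simplified jitter buffer: hold each packet until a fixed playout
# deadline (send time + buffer delay) so playback comes out evenly.
# Packets that miss the deadline cause a glitch.

BUFFER_MS = 40  # assumed playout delay for illustration

def playout_schedule(packets, buffer_ms=BUFFER_MS):
    """packets: list of (send_ms, arrive_ms). Returns play time or 'late'."""
    schedule = []
    for send_ms, arrive_ms in packets:
        deadline = send_ms + buffer_ms
        if arrive_ms <= deadline:
            schedule.append(deadline)   # held in buffer, played on time
        else:
            schedule.append("late")     # missed the deadline -> glitch
    return schedule

# Packets sent every 20 ms, arriving with erratic delay.
packets = [(0, 12), (20, 55), (40, 48), (60, 110), (80, 95)]
print(playout_schedule(packets))  # [40, 60, 80, 'late', 120]
```

Note that the first packet arrived after only 12 ms but isn’t played until 40 ms – deliberately delaying it is what makes the stream come out at an even 20 ms cadence.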
Fully understanding the causes of jitter in any specific network is a challenge because the causes can be subtle. It’s often hard to pinpoint a jitter problem because it can be here one millisecond and gone the next. But it’s something we should be discussing more. A lot of the complaints people have about their broadband connection are caused by too-high jitter.
From a cost perspective, we’re not seeing any practical difference between the price of XGS-PON that offers a 10-gigabit data path and traditional GPON. I have a number of clients now installing XGS-PON, and we now recommend it for new fiber projects. I’ve been curious about how ISPs are going to deploy the technology in residential and small-business neighborhoods.
GPON has been the technology of choice for well over a decade. GPON delivers a 2.4-gigabit download path to each neighborhood PON. Most of my clients have deployed GPON in groups of up to 32 customers in a neighborhood PON. In practical deployment, most of them pack a few fewer than 32 onto the typical GPON card.
I’m curious about how ISPs will deploy XGS-PON. From a pure math perspective, an XGS-PON network delivers four times as much bandwidth to each neighborhood as GPON. An ISP could maintain the same level of service as GPON by packing 128 customers onto each XGS-PON card. But network engineering is never that nicely linear, and there are a number of factors to consider when designing a new network.
All ISPs rely on oversubscription when deciding the amount of bandwidth needed for a given portion of a network. Oversubscription is shorthand for taking advantage of the phenomenon that customers in a given neighborhood rarely all use the bandwidth they’ve been assigned, and never all use it at the same time. Oversubscription allows an ISP to feel safe in selling gigabit broadband to 32 customers in a GPON network and knowing that collectively they will not ask to use more than 2.4 gigabits at the same time. For a more detailed description of oversubscription, see this earlier blog. There are ISPs today that put 64 customers or more on a GPON card – the current capacity is up to 128 customers. ISPs understand that putting too many customers on a PON card will start to emulate the poor behavior we see in cable company networks that sometimes bog down at busy times.
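The oversubscription arithmetic in the paragraph above is easy to check directly. This back-of-the-envelope calculation ignores protocol overhead and busy-hour statistics, so treat the ratios as illustrative:

```python
# Back-of-the-envelope PON oversubscription, using figures from the text.

def oversubscription_ratio(customers, sold_mbps, shared_mbps):
    """Total sold bandwidth divided by the actual shared capacity."""
    return customers * sold_mbps / shared_mbps

# 32 gigabit customers sharing a 2.4-gigabit GPON:
print(oversubscription_ratio(32, 1000, 2400))    # ~13.3 : 1

# 128 gigabit customers sharing a 10-gigabit XGS-PON:
print(oversubscription_ratio(128, 1000, 10000))  # 12.8 : 1
```

The interesting result is that 128 customers on XGS-PON is actually a slightly *lower* oversubscription ratio than today’s comfortable 32-customer GPON arrangement, which is why the 4× math looks tempting on paper.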
Most GPON networks today are not overstressed. Most of my clients tell me that they can comfortably fit 32 customers onto a GPON card and only rarely see a neighborhood maxed out in bandwidth. But ISPs do sometimes see a PON that gets overstretched if there are more than a few heavy users in the same PON. The easiest solution to that issue today is to reduce the number of customers in a busy PON – such as splitting into two 16-customer PONs. This isn’t an expensive issue because over-busy PONs are still a rarity.
ISPs understand that, year after year, customers are using more bandwidth and engaging in more data-intensive tasks. Certainly, a PON with half a dozen people now working from home is a lot busier than it was before the pandemic. It might be years before a lot of neighborhood PONs get overstressed, but eventually, the growth in bandwidth demand will catch up to the GPON capacity. As a reminder, the PON engineering decision is based on the amount of demand at the busiest times of the day. That busy hour level of traffic is not growing as quickly as the overall level of bandwidth used by homes – which more than doubled in just the last three years.
There are other considerations in designing an XGS-PON network. Today, the worst that happens when a PON card fails is for 32 customers to lose bandwidth. It feels riskier from a business perspective to have 128 customers sharing a PON card – that’s a much more significant network outage.
There is no magic metric for an ISP to use. You can’t fully trust vendors, because they will sell more PON cards if an ISP is extremely conservative and puts only 32 customers on a 10-gigabit PON. But ISP owners might not feel comfortable leaping to 128 or more customers on a PON. There are worse decisions to have to make, because almost any configuration of PON oversubscription will work on a 10-gigabit network. The right solution will balance making sure that customers get the bandwidth they request against being so conservative that the PON cards are massively underutilized. Over time, ISPs will develop internal metrics that fit their service philosophy and the demands of their customer base.
There is an interesting battle going on to be the technology that monetizes the control of Internet of Things devices. Like a lot of tech hype, IoT has developed a lot slower than originally predicted – but it’s now finally becoming a big business. I think back to a decade ago when tech prognosticators said we’d soon be living in a virtual cloud of small sensors that would monitor everything in our lives. According to those early predictions, our farm fields should already be fully automated, and we should all be living in the smart home envisioned by the Jetsons. Those predictions probably say more about the tech press that hypes new technologies than about IoT.
I’ve been noticing increasing press releases and articles talking about different approaches to monetizing IoT traffic. The one that we’ve all heard the most about is 5G. The cellular companies told Wall Street five years ago that the monitoring of IoT devices was going to fuel the 5G business plan. The wireless companies envisioned households all buying a second cellular subscription to monitor devices.
Except in a few minor examples, this business plan never materialized. I was reminded of it this week when I saw AT&T partnering with Smart Meter to provide patient monitoring for chronic conditions like diabetes and high blood pressure. The monitoring devices worn by patients include a SIM card, and patients can be monitored anywhere within range of a cellular signal. It’s a great way for AT&T to monetize IoT subscriptions – in this case, with monthly fees likely covered by health insurance. It sounds like an awesome product.
Another player in the IoT world is LEO satellites. In August of last year, SpaceX, the parent of Starlink, made a rare acquisition by buying Swarm. This company envisions using satellites to monitor outdoor IoT devices anywhere in the world. The Swarm satellites weigh less than a pound each, and the Swarm website says the goal is to have three of these small satellites in range of every point on earth by the end of 2022. That timeline slowed due to the purchase by SpaceX, but this could be a huge additional revenue stream for the company. Swarm envisions putting small receivers in places like fields. Like with Starlink, customers must buy the receivers, and there is an IoT data monitoring plan that allows the collection of 750 data packets per month for a price of $60 per year.
Also still active in pursuing the market are a number of companies promoting LoRaWAN technology. This technology uses tall towers or blimps and unlicensed low-power spectrum to communicate with IoT monitors over a large geographic area. The companies developing this technology can be found at the LoRa Alliance.
Of course, the current king of IoT is WiFi. Charter recently said it is connected to half a billion devices on its WiFi network. WiFi has the advantage of a free IoT connection for the price of a broadband connection.
Each of these technologies has a natural market niche. The AT&T health monitoring system only makes sense on a cellular network since patients need to be monitored everywhere they go during the day. Cellular should be the go-to technology for mobile monitoring. The battle between LoRaWAN and satellites will be interesting and will likely eventually come down to price. Both technologies can be used to reach farm fields where cellular coverage is likely to never be ubiquitous. WiFi is likely to carry the signals from the devices in our homes – the AT&T vision of everybody buying an IoT cellular data plan sounds extremely unlikely since we all can have the same thing for the cost of a WiFi router.
The wireless industry’s non-stop claims that we’ve moved from 4G to 5G finally slowed to the point that I stopped paying attention during the last year. There is an interesting article in PC Magazine that explains why 5G has dropped off the front burner.
The article cites interviews with Ari Pouttu of Finland’s University of Oulu about the current state and the future of 5G. That university has been at the forefront of the development of 5G technology and is already looking at 6G technology.
Pouttu reminds us that a new ‘G’ generation of wireless technology arrives about every ten years, but that it takes twenty years for the market to fully embrace all of the benefits of a new generation of wireless technology.
We are just now entering the heyday of 4G. The term 4G has been bandied about by wireless marketing folks for so long that it’s hard to believe that we didn’t see a fully-functional 4G cell site until late in 2018. Since then, the cellular companies have beefed up 4G in two ways. First, the technology is now deployed at cell sites everywhere. But more importantly, 4G systems have been bolstered by the addition of new bands of cellular spectrum. The marketing folks have gleefully been labeling this new spectrum as 5G, but the new spectrum is doing nothing more than supporting the 4G network.
I venture to guess that almost nobody thinks their life has been drastically improved because 4G cellphone speeds have climbed in cities over the last few years from 30 Mbps to over 100 Mbps. I can see that faster speed on my cellphone if I take a speed test, but I haven’t really noticed much difference between the performance of my phone today compared to four years ago.
There are two major benefits from the beefed-up 4G. The first benefits everybody but has gone unnoticed. The traditional spectrum bands used for 4G were getting badly overloaded, particularly in metropolitan areas. The new bands of spectrum have relieved the pressure on cell sites and are supporting the continued growth in cellular data use. Without the new spectrum, our 4G experience would be deteriorating.
The new spectrum has also enabled the cellular carriers to all launch rural fixed cellular broadband products. Before the new spectrum, there was not enough bandwidth on rural cell sites to support both cellphones and fixed cellular customers. The many rural homes that can finally buy cellular broadband that is faster than rural DSL are the biggest winners.
But those improvements have nothing to do with 5G. The article points out what has always been the case. The promise of 5G has never been about better cellphone performance. It’s always been about applications like using wireless spectrum in complex settings like factories where feedback from huge numbers of sensors needs to be coordinated in real-time.
The cellular industry marketing machine did a real number on all of us – but perhaps most of all on the politicians. We’ve had the White House, Congress, and State politicians all talking about how the U.S. needed to win the 5G war with China – and there is still some of that talk going around today. This hype was pure rubbish. What the cellular carriers needed was more spectrum from the FCC to stave off the collapse of the cellular networks. But no cellular company wanted to crawl to Congress begging for more spectrum, because doing so would have meant the collapse of cellular company stock prices. Instead, we were fed a steady diet of false rhetoric about how 5G was going to transform the world.
The message from the University of Oulu is that most 5G features are probably still five or six years away. But even when they finally get here, 5G is not going to bring much benefit or change to our daily cellphone usage. It was never intended to do that. We already have 100 Mbps cellular data speeds with no idea how to use the extra speed on our cellphones.
Perhaps all we’ve learned from this experience is that the big cellular companies have a huge amount of political and social clout and were able to pull the wool over everybody’s eyes. They told us that the sky was falling and could only be fixed with 5G. I guess we’ll find out in a few years if we learned any lesson from this because we can’t be far off from hearing the hype about 6G. This time it will be 100% hype because 6G deals with extremely high frequencies – wavelengths so short that they will never be used in outdoor cellular networks. But I have a feeling that we’ll find ourselves in a 6G war with China before we know it.
One of the first in-depth reviews I’ve found for T-Mobile’s fixed cellular broadband was published in The Verge. It’s not particularly flattering to T-Mobile, and this particular customer found the performance to be unreliable – fast sometimes and barely functioning at other times. But I’ve seen other T-Mobile customers raving about the speeds they are receiving.
We obviously can’t draw any conclusions based upon a single review by one customer, but his experience and the contrasting good reviews by others prompted me to talk about why performance on cellular broadband networks can vary so significantly.
I’ve always used the word wonky to describe cellular performance. It’s something I’ve tracked at my own house, and for years the reception of the cellular signal in my home office has varied hour-by-hour and day-by-day. This is a basic characteristic of cellular networks that you’ll never find the cellular carriers talking about or admitting.
The foremost issue with cellular signal strength is the distance of a customer from the local cellular tower. All wireless data transmissions weaken with distance. This is easy to understand. Wireless transmissions spread as they leave a transmitter – the traditional cone-shaped way we depict a wireless transmission demonstrates the spread. If two customers have the same receiver, the customer who is closer to the tower will receive a stronger signal, and more data, than somebody farther away where the signal has spread. The customer in the bad review admitted he wasn’t super close to a cell tower, and somebody in his own neighborhood who lives closer to the cell site might have a stronger signal and a better opinion of the product.
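The distance effect can be quantified with the standard free-space path loss formula. Real cellular paths lose even more signal to terrain and buildings, so treat these as best-case numbers; the distances and the 1900 MHz frequency are illustrative:

```python
# Free-space path loss (FSPL): signal loss grows with the log of
# distance and frequency. Standard formula for d in km, f in MHz.
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 1900 MHz signal at two distances from the tower:
print(round(fspl_db(0.5, 1900), 1))  # ~92 dB at half a kilometer
print(round(fspl_db(5.0, 1900), 1))  # ~112 dB at five kilometers
```

Because the loss is logarithmic, every tenfold increase in distance adds 20 dB of loss – a factor of 100 in received power – which is why neighbors at different distances from the same tower can have such different experiences.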
There are other factors that create variability in a cellular signal. One is basic physics and the way radio waves behave outdoors. The cellular signal emanating from your local cell tower varies with the conditions in the atmosphere – the temperature, humidity, precipitation, and even wind. Things that stir up the air will affect the cellular signal. A wireless signal in the wild is unpredictable and variable.
Another issue is interference. Cellular companies that use licensed spectrum don’t want to talk about interference, but it exists everywhere. Some interference comes from natural sources like sunspots. But the biggest source of interference is the signal from other cell towers. Interference occurs any time there are multiple sources of the same frequency being used in the same area.
The customer in the review talks about the performance differing by the time of day. That is a phenomenon that can affect all broadband networks, and in this case is specific to the local robustness of the T-Mobile network. Performance drops when networks start getting too busy. Every DSL or cable broadband customer has witnessed the network slowing at some times of the day. This can be caused by too many customers sharing the local network – in this case, the number of customers using a cell tower at the same time. The problem can also be caused by high regional usage if multiple cell towers share the same underlying broadband backbone.
The final issue that is somewhat unique to cellular networks is carrier priority. It’s highly likely that T-Mobile is giving first priority to customers using cell phones. That’s the company’s primary source of revenue, so cell phones get first dibs at the bandwidth. That means in busy times that the data left over for the fixed cellular customers might be greatly pinched. As T-Mobile and other carriers sell more of the fixed product, I predict the issue of having second priority will become a familiar phenomenon.
This blog is not intended as a slam against fixed cellular broadband. The customer who wrote the review switched to cellular broadband to get a less expensive connection than from his cable company. But this customer clearly bought into the T-Mobile advertising hype, because a cellular broadband signal will never be as reliable as a signal delivered through wires.
We can’t forget the real promise of fixed cellular broadband – bringing broadband to folks who have no alternatives. Somebody that switched to T-Mobile from a 1 Mbps rural DSL product would have written a different and more glowing review of the same product. The bottom line is that anybody buying cellular broadband should recognize that it’s a wireless product – and that means the product comes with the quirks and limitations that are inherent with wireless broadband. I imagine that we’re going to continue to see bad reviews from customers who want to save money but still want the performance that comes with wired broadband. This is another reminder that it’s a mistake to judge a broadband product strictly by the download speed – a 100 Mbps cellular broadband product is not the same as a 100 Mbps cable company connection.
One of the consequences of our increased use of broadband is a big increase in the amount of data that we store outside our homes and businesses. The numbers are becoming staggering. There are currently about 3.7 billion people using the Internet, and together we generate 2.5 quintillion bytes of online data every day. The trend is that by 2025 we’ll be storing 160 zettabytes of data per year – a zettabyte is one trillion gigabytes.
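The storage arithmetic is easy to sanity-check:

```python
# Sanity-checking the storage numbers. A zettabyte is 10**21 bytes,
# which is indeed one trillion (10**12) gigabytes.

ZETTABYTE = 10**21
GIGABYTE = 10**9

print(ZETTABYTE // GIGABYTE)  # 1,000,000,000,000 gigabytes per zettabyte

# 160 zettabytes per year works out to roughly:
per_day = 160 * ZETTABYTE / 365
print(f"{per_day / 10**18:.0f} exabytes per day")  # ~438 exabytes/day
```

For scale, that projected daily volume is well over a hundred times today’s 2.5 quintillion (2.5 exabyte) daily figure cited above.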
I store a lot more data online than I used to. I now store things in the cloud all day long. When I edit a Word or Excel file, my changes are all stored in the cloud. I also back up every change on my computer every day. I write and store these blogs on a WordPress server. Copies of my blogs are automatically posted and stored on Twitter and LinkedIn. My company’s accounting records are stored online. When my car pulls into a driveway, it uploads diagnostics into the cloud. Pictures I take on my cellphone are automatically saved. I have no idea what else is being shared and saved by apps and software that I routinely use. As recently as a few years ago, I had very little interaction with the cloud, but I now seemingly live and work in the cloud.
It may be hard to believe, but in the foreseeable future, we’ll be facing a data storage crisis. We can’t afford the resources to be able to store data in the same way we do today. Data centers now use nearly 20% of the electricity used by technology. A single data center uses more electricity than a small town. We’re consuming electric generation resources and spinning off huge amounts of carbon dioxide to be able to save the 45 pictures taken at the birthday party you attended last night.
One of the obvious solutions to the data storage challenge is to throw away data. But who gets to decide what gets kept? The alternative is to find better methods of data storage that don’t require as much energy or take as much space. There are several areas of research into better storage – none is yet ready for prime time, but the ideas are intriguing.
5D Optical Storage.
Researchers at the University of Southampton are exploring data storage that uses lasers to etch data into cubes of silica glass. The technique is being called 5D because, in addition to using the normal three spatial axes as storage parameters, it also uses the size and orientation of each recorded mark. Think of this as a 3D version of the way we used to store data on compact disks. This technology would be used for long-term storage since something that is etched into the glass is permanent. Storing data in glass requires no power, and the glass cubes are nearly indestructible. One small cube could store hundreds of terabytes of data.
Molecular Storage. Researchers at the University of Manchester are taking a different approach and looking at the benefits of storing data at super-cold temperatures. They have developed man-made molecules that can store several hundred times more data than the equivalent space on current hard drives. The key is storing the molecules at low temperatures. This is the same research group that discovered graphene and that works with unique molecular structures. Scientists have known that low-temperature storage can work, and the big breakthrough is having the technology work at 80 Kelvin using liquid nitrogen (significantly warmer than past work near absolute zero using liquid helium). Since our atmosphere is mostly nitrogen, the frozen gas is inexpensive to produce. Scientists are hoping that the molecules will be able to retain data for a long time, even after losing power.
DNA Storage. Scientists have been intrigued for over a decade about using DNA as a storage medium. DNA could be an ideal storage medium because it’s built from base pairs of nucleotides, and the convoluted coiled structure provides a lot of storage capacity in a condensed space. A team at Harvard was able to store the code for a video on a strand of bacterial DNA. Since then, the commercial company Catalog has been working to perfect the technology. The company believes it is close to a breakthrough by using a synthetic version of a DNA molecule rather than living tissue. Data could be written to the molecule as it’s being assembled. Like with etched glass, this is permanent storage and highly promising. This past summer, the company announced it was able to record the full 16 gigabytes of Wikipedia into a tiny vial of the material.
We need these technologies and others to work if we don’t want to drown in our own data.
The WiFi 6 standard was just approved in 2020 and is starting to find its way into home and business WiFi networks. If you've purchased a new WiFi router recently, there is a decent chance it supports WiFi 6. However, a home won't see the benefits of the new WiFi until devices like TVs, computers, and various IoT devices have been upgraded to the new standard. It's likely to take years for WiFi 6 to get fully integrated into most homes.
But that hasn’t stopped vendors from already working on the next generation of WiFi technology, naturally being called WiFi 7. WiFi 7 promises faster speeds and lower latency and will be aimed at maximizing video performance. Qualcomm says it expects full WiFi 7 to become available after 2024. WiFi 7 will be based on the new 802.11be specification.
The speed capabilities have climbed with each subsequent generation of WiFi. WiFi 5, which most of you are running in your homes today, has a maximum speed of 3.5 Gbps. WiFi 6 stepped maximum speeds up to 9.6 Gbps. The early specifications for WiFi 7 call for maximum data speeds of 30 Gbps. While most of us will never tax the capabilities of WiFi 5, faster speeds matter because they let a WiFi signal burst huge amounts of data in a short period of time.
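The practical effect of those maximum rates is easiest to see as burst time. A quick back-of-envelope calculation, using the spec maximums above (real-world throughput is far lower):

```python
# Theoretical time to burst 1 gigabyte (8 gigabits) at each
# generation's spec-maximum rate. Actual speeds are much lower.

MAX_GBPS = {"WiFi 5": 3.5, "WiFi 6": 9.6, "WiFi 7": 30.0}

def burst_seconds(gigabytes: float, gbps: float) -> float:
    # 1 gigabyte = 8 gigabits
    return gigabytes * 8 / gbps

for gen, rate in MAX_GBPS.items():
    print(f"{gen}: {burst_seconds(1, rate):.2f} seconds per GB")
```

At the theoretical maximums, a gigabyte that takes over two seconds on WiFi 5 moves in roughly a quarter of a second on WiFi 7.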
WiFi 7 isn’t going to require additional WiFi spectrum – but more spectrum helps. The federal Court of Appeals for Washington DC just recently affirmed the FCC’s allocation of 6 GHz spectrum for WiFi use. The NCTA, representing the big cable companies, recently filed a request asking the FCC to consider opening additional new bands of free public spectrum for WiFi in the 7 GHz band and the lower 3 GHz band. The trade group argues that WiFi has created the largest public benefit of any spectrum band the FCC has ever authorized. It also argues that the world is finally becoming awash in Internet of Things devices, with Charter alone connecting to half a billion IoT devices.
There are two big changes that will differentiate WiFi 7 from WiFi 6. First is a major upgrade to the WiFi upload link. WiFi 7 will incorporate uplink multiuser multiple-input multiple-output (UL MU-MIMO) technology. The new technology creates multiple paths between a router and a WiFi-connected device. Connecting multiple paths to a device will significantly increase the amount of data that can be transmitted in a short period of time. WiFi 6 allows for a theoretical eight simultaneous paths – WiFi 7 increases that to sixteen paths.
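The appeal of doubling spatial streams is simple arithmetic. As a rough sketch (the 1.2 Gbps per-stream rate here is my own illustrative number, not from the spec, and real gains are reduced by channel correlation and protocol overhead):

```python
# Back-of-envelope: aggregate MU-MIMO capacity scales roughly
# with the number of spatial streams under ideal conditions.

def aggregate_gbps(streams: int, per_stream_gbps: float) -> float:
    # Ideal linear scaling; real-world gains are smaller.
    return streams * per_stream_gbps

wifi6 = aggregate_gbps(8, 1.2)    # WiFi 6: up to 8 streams
wifi7 = aggregate_gbps(16, 1.2)   # WiFi 7: up to 16 streams
print(wifi6, wifi7)               # 9.6 vs 19.2 under ideal assumptions
```

Under these idealized assumptions, going from eight to sixteen streams doubles the ceiling – which is a big part of how WiFi 7 reaches its much higher maximum rates.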
WiFi 7 will also bring another improvement labeled coordinated multiuser MIMO (CMU-MIMO). CMU-MIMO will let a home device connect to more than one WiFi router at the same time. Picture your computer connected to several channels from different home routers. This coordination should result in faster connections, lower latency, and the ability to deliver high bandwidth to every corner of a home that is equipped with multiple WiFi access points. This is the most complicated challenge in the WiFi 7 specification.
WiFi 7 promises other improvements as well. The 802.11be specification allows for combining spectrum paths. Today’s WiFi routers use one channel of spectrum for a single device, and the planned upgrade would allow a device to combine signal paths from different WiFi frequencies at the same time. Another slated improvement is support for 4096-QAM. QAM (quadrature amplitude modulation) varies both the amplitude and phase of the signal so that each transmitted symbol carries more bits.
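The gain from 4096-QAM comes straight from the math: a constellation with M points carries log2(M) bits per symbol, so moving from WiFi 6's 1024-QAM to 4096-QAM raises the per-symbol payload from 10 bits to 12 – a 20% raw gain, before overhead.

```python
# Bits carried per symbol for a given QAM constellation size.

import math

def bits_per_symbol(qam_order: int) -> int:
    # An M-point constellation encodes log2(M) bits per symbol.
    return int(math.log2(qam_order))

print(bits_per_symbol(1024))  # 10 (WiFi 6's top modulation)
print(bits_per_symbol(4096))  # 12 (WiFi 7)
print(bits_per_symbol(4096) / bits_per_symbol(1024) - 1)  # 0.2 -> 20% gain
```

The catch is that denser constellations require a much cleaner signal, so 4096-QAM will only kick in for devices close to the router.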
The 802.11be specification is pushing the limits of physics in a few places and may never fully achieve everything being promised. But it represents another huge upgrade for WiFi. A few vendors will be previewing early versions of WiFi 7 technology at CES 2022. Maybe most of us will at least have made the transition to WiFi 6 before this latest and greatest WiFi is available.
University of Chicago students conducted a study on and near the campus, looking at how LAA (Licensed Assisted Access) affects WiFi. Cellular carriers began using LAA technology in 2017. The technology allows a cellular carrier to snag unlicensed spectrum to create bigger data pipes than can be achieved with traditional cellular spectrum. When cellular companies combine frequencies using LAA, they can theoretically create a data pipe as large as a gigabit while only using 20 MHz of licensed frequency. The extra bandwidth for this application comes mostly from the unlicensed 5 GHz band and can match the fastest speeds that can be delivered by home routers using 802.11ac.
There has always been an assumption that the cellular use of LAA technology would interfere to some extent with WiFi networks. But the students found examples where LAA killed as much as 97% of local WiFi network signal strength. They found that when LAA kicked in, the performance of nearby WiFi networks always dropped.
This wasn’t supposed to happen. Back when the FCC approved the use of LAA, the cellular carriers all said that interference would be at a minimum because WiFi is mostly used indoors and LAA is used outdoors. But the study showed there can also be a big data drop for indoor WiFi routers if cellular users are in the vicinity. That means people on the street can interfere with the WiFi strength in a Starbucks (or your home).
The use of WiFi has also changed a lot since 2017, and during the pandemic, we installed huge numbers of outdoor hotspots for students and the public. This new finding suggests that LAA usage could be killing outdoor broadband established for students to do homework. Students didn’t just use WiFi hotspots when they couldn’t attend school – many relied on WiFi broadband in the evenings and on weekends to do homework. Millions of people without home broadband also use public WiFi hotspots.
LAA degrades WiFi for several reasons. WiFi is a listen-before-talk technology: when a WiFi device wants to grab a connection to the router, it gets in line with other WiFi devices rather than being connected immediately. LAA acts like all cellular traffic and immediately grabs bandwidth if it is available. This difference in the way of using spectrum gives LAA priority in grabbing the frequency first.
LAA connections also last longer. You may not realize it, but WiFi devices don’t hold a permanent connection. WiFi routers connect to devices in 4-millisecond bursts. In a home where there aren’t many devices trying to use a router, these bursts may seem continuous, but in a crowded place with a lot of WiFi users, devices have to pause between connections. LAA bursts last 10 milliseconds instead of WiFi’s 4. This means LAA devices both connect immediately to unlicensed spectrum and keep the connection longer than a WiFi device. It’s not hard for multiple LAA connections to completely swamp a WiFi network.
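The airtime asymmetry described above can be sketched with a toy simulation. This is not a real MAC-layer model – the 1–3 ms WiFi backoff window is my own illustrative assumption – but it shows how immediate access plus longer bursts lets LAA dominate a shared channel.

```python
# Toy model: LAA grabs the channel immediately for 10 ms bursts;
# WiFi must sense the channel idle, back off briefly, then send
# a 4 ms burst. Track each technology's share of total airtime.

import random

random.seed(1)  # deterministic for repeatability

def simulate(ms_total=10_000, wifi_backoff_ms=(1, 3)):
    t, wifi_air, laa_air = 0, 0, 0
    while t < ms_total:
        # LAA transmits as soon as the channel frees up.
        laa_air += 10
        t += 10
        # WiFi senses idle, waits a random backoff, then bursts.
        t += random.randint(*wifi_backoff_ms)
        wifi_air += 4
        t += 4
    return wifi_air / t, laa_air / t

wifi_share, laa_share = simulate()
print(f"WiFi airtime: {wifi_share:.0%}, LAA airtime: {laa_share:.0%}")
```

Even in this generous model where WiFi gets a turn every cycle, LAA claims well over half the airtime; with multiple LAA users, WiFi's share collapses further.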
This is a perfect example of how hard it is to set wireless policy. The FCC solicited a lot of input when the idea of sharing unlicensed spectrum with cellular carriers was first raised. At the time, the technology being discussed was LTE-U, a precursor to LAA. The FCC heard from everybody in the industry, with the WiFi industry saying that cellular use could overwhelm WiFi networks and the cellular industry saying that concerns were overblown. The FCC always finds itself refereeing between competing concerns and has to pick winners in such arguments. The decision by the FCC to allow cellular carriers to use free public spectrum highlights another trend – the cellular companies, by and large, get what they want.
It will be interesting to see if the FCC does anything as a result of this study and other evidence that cellular companies have gone a lot further with LAA than promised. I won’t hold my breath. AT&T also announced this week that it is starting to test LAA using the unlicensed portion of the 6 GHz spectrum.