A New Fiber Optic Speed Record

Researchers at University College London (UCL) have set a new record for data transmission over fiber optics. They were able to send data through a fiber optic cable at over 178 terabits per second, or 178,000 gigabits per second. The research was done in collaboration with the fiber optic firms Xtera and KDDI Research. The press release claims this is 20% faster than the previous record.

The achieved speed comes close to the Shannon limit, which defines the maximum amount of error-free data that can be sent over a communications channel. Perhaps the most impressive thing about the announcement is that the UCL scientists achieved this speed over existing fiber optic cables rather than pristine fiber installed in a laboratory.
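
To make the Shannon limit concrete, here is a minimal sketch of the Shannon-Hartley calculation: capacity equals bandwidth times log2(1 + signal-to-noise ratio). The bandwidth and SNR values below are placeholder assumptions for illustration, not the parameters of the UCL experiment.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free bit rate for a channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical example values, not the actual UCL experiment parameters.
bandwidth_hz = 16.8e12        # assume 16.8 THz of usable optical bandwidth
snr_db = 20.0                 # assume a 20 dB signal-to-noise ratio
snr_linear = 10 ** (snr_db / 10)

capacity = shannon_capacity_bps(bandwidth_hz, snr_linear)
print(f"Theoretical limit: {capacity / 1e12:.1f} terabits per second")
```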

The fast signal throughput was achieved by combining several techniques. First, the system uses Raman amplification, in which pump photons injected into the fiber transfer some of their energy to the data signal through stimulated Raman scattering. The scattering is predictable and can be tailored to the characteristics needed for the signal to travel optimally through glass fiber.

The researchers also used erbium-doped fiber amplifiers. For those who have forgotten the periodic table, erbium is a rare-earth metal with an atomic number of 68. Erbium has a key characteristic needed for fiber optic amplifiers: it efficiently amplifies light in the wavelengths used by fiber optic lasers.

Finally, the system used semiconductor optical amplifiers (SOAs). These are diodes treated with anti-reflection coatings so that the laser signal can pass through with a minimum of scattering. The net result of all of these techniques is that the scientists were able to reduce the amount of light scattered during transmission through the glass fiber, thus maximizing data throughput.

UCL also used a wider range of wavelengths than is normally used in fiber optics. Most fiber optic transmission technologies leave empty guard bands around each wavelength being used (much like we do with radio transmissions). The UCL scientists used all of the spectrum, without separation bands, and applied several techniques to minimize interference between the bands of light.

This short description of the technology being used is not meant to intimidate a non-technical reader, but rather to show the level of complexity in today’s fiber optic technology. It’s a technology that we all take for granted, but which is far more complex than most people realize. Fiber optic technology might be the most lab-driven technology in daily use, since the technology came from research labs and scientists have been steadily improving it for decades.

We’re not going to see multi-terabit lasers in regular use in our networks anytime soon, and that’s not the purpose of this kind of research. UCL says that the most immediate benefit of their research is that they can use some of these same techniques to improve the efficiency of existing fiber repeaters.

Depending upon the kind of glass being used and the spectrum utilized, current long-haul fiber technology requires having the signals amplified every 25 to 60 miles. That means a lot of amplifiers are needed for long-haul fiber routes between cities. Without amplification, the laser light signals get scattered to the point where they can’t be interpreted at the receiving end of the light transmission. As implied by their name, amplifiers boost the power of light signals, but their more important function is to reshape and retime the light pulses to keep the signal coherent.

Each amplification site adds to the latency in long-haul fiber routes since fibers must be spliced into amplifiers and passed through the amplifier electronics. The amplification process also introduces errors into the data stream, meaning some data has to be sent a second time. Each amplifier site must also be powered and housed in a cooled hut or building. Reducing the number of amplifier sites would reduce the cost and the power requirement and increase the efficiency of long-haul fiber.
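
To get a feel for the numbers involved, here is a back-of-the-envelope sketch that estimates how many intermediate amplifier sites a long route needs at different spacings. The route length and the spacings used are illustrative assumptions, not figures from the UCL work.

```python
import math

def amplifier_sites(route_miles: float, spacing_miles: float) -> int:
    """Number of intermediate amplifier huts needed along a route."""
    # One site at every spacing interval, excluding the two route end points.
    return max(0, math.ceil(route_miles / spacing_miles) - 1)

route_miles = 780          # roughly a Dallas-to-Atlanta-length route (assumed)
for spacing in (25, 60, 100):   # low end, high end, hoped-for wider spacing
    sites = amplifier_sites(route_miles, spacing)
    print(f"{spacing}-mile spacing: {sites} amplifier sites")
```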

Keeping Track of Satellites

The topic of satellite broadband has been heating up lately. Elon Musk’s Starlink now has over 540 broadband satellites in the sky and is talking about starting a few beta tests of the technology with customers. OneWeb went into bankruptcy but is being bought out by a team consisting of the British government and Bharti Airtel, the largest cellular company in India. Jeff Bezos has continued to move forward with Project Kuiper and the FCC recently gave the nod for the company to move ahead.

These companies have grandiose plans to launch large numbers of satellites. Starlink’s first constellation will have over 4,000 satellites – and the FCC has given approval for up to 12,000 satellites. Elon Musk says the company might eventually grow to over 30,000 satellites. Project Kuiper told the FCC it has plans for over 3,200 satellites. The original OneWeb plan called for over 1,200 satellites. Telesat has announced a goal of launching over 500 satellites. A big unknown is Samsung, which announced a plan a year ago to launch over 4,600 satellites. Even if all of these companies don’t fully meet their goals, there are going to be a lot of satellites in the sky over the next decade.

To put these huge numbers into perspective, consider the number of satellites ever shot into space. The United Nations Office for Outer Space Affairs (UNOOSA) has been tracking space launches for decades. It reported at the end of 2019 that there have been 8,378 objects put into space since the first Sputnik in 1957. As of the beginning of 2019, there were 4,987 satellites still in orbit, although only 1,957 were still operational.

There is a lot of concern in the scientific community about satellite collisions and space junk. Low-earth satellites travel at a speed of about 17,500 miles per hour to maintain orbit. Satellites that collide at that speed create many new pieces of space junk, also traveling at high speed. NASA estimates there are currently over 128 million pieces of orbiting debris smaller than 1 centimeter, 900,000 objects between 1 and 10 centimeters, and 22,000 pieces of debris larger than about 4 inches.
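
The reason such small objects matter comes down to simple kinetic energy. A quick sketch, using an assumed 1-gram fragment and the orbital speed quoted above, shows how much energy even a tiny piece of debris carries:

```python
def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

orbital_speed_m_s = 17_500 * 0.44704   # 17,500 mph converted to meters per second
fragment_mass_kg = 0.001               # a 1-gram fragment, assumed for illustration

energy = kinetic_energy_joules(fragment_mass_kg, orbital_speed_m_s)
print(f"A 1-gram fragment at orbital speed carries ~{energy:,.0f} joules")
```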

NASA scientist Donald Kessler described the dangers of space debris in 1978 in what’s now called the Kessler syndrome. Every space collision creates more debris, and eventually there could be a cloud of circling debris that makes it nearly impossible to maintain satellites in space. While scientists think that such a cloud is almost inevitable, some worry that a major collision between two large satellites, or malicious destruction by a bad-actor government, could accelerate the process and quickly knock out all of the satellites in a given orbit.

There has only been one known satellite collision, when a dead Russian satellite collided with an Iridium communications satellite over a decade ago. That collision kicked off hundreds of pieces of large debris. There have been numerous near misses, including with the manned International Space Station. There was another near miss in January between the defunct Poppy VII-B military satellite from the 1960s and the retired IRAS satellite that was used for infrared astronomy in the 1980s. It was recently reported that Russia launched a new satellite that passed through one of Starlink’s newly launched swarms.

The key to avoiding collisions is to use smart software to track the trajectories of satellites and provide ample time for satellite owners to adjust an orbital path to avoid a collision. Historically, that tracking role has been performed by the US military – but the Pentagon has made it clear that it is not willing to continue in this role. No software is going to help avoid collisions between dead satellites like the close call in January. However, all newer satellites should be maneuverable enough to avoid collisions as long as sufficient notice is provided.
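
At its core, that tracking software is doing conjunction screening: propagating the paths of two objects forward and flagging any close approach early enough to act on it. The toy sketch below uses a straight-line approximation and made-up state vectors; real screening relies on full orbital propagation and uncertainty estimates.

```python
import numpy as np

def closest_approach(pos1, vel1, pos2, vel2, horizon_s: float):
    """Time (s) and distance (km) of closest approach, assuming straight-line motion."""
    r = np.array(pos2) - np.array(pos1)      # relative position (km)
    v = np.array(vel2) - np.array(vel1)      # relative velocity (km/s)
    if np.dot(v, v) == 0:
        return 0.0, float(np.linalg.norm(r))
    t_star = -np.dot(r, v) / np.dot(v, v)    # time of unconstrained minimum distance
    t_star = min(max(t_star, 0.0), horizon_s)
    miss = np.linalg.norm(r + v * t_star)
    return float(t_star), float(miss)

# Hypothetical state vectors in km and km/s, not real satellite data.
t, miss_km = closest_approach(
    pos1=[7000, 0, 0],    vel1=[0, 7.5, 0],
    pos2=[7005, -300, 2], vel2=[0, 7.6, 0],
    horizon_s=3600,
)
if miss_km < 10:          # example alert threshold of 10 km
    print(f"Conjunction alert: ~{miss_km:.1f} km miss distance in {t:.0f} seconds")
else:
    print(f"No alert: closest approach is {miss_km:.1f} km")
```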

A few years ago, the White House issued a directive that would give the tracking responsibility to the Commerce Department under a new Office of Space Commerce. However, some in Congress think the proper agency to track satellites is the Federal Aviation Administration, which already tracks anything in the sky at lower altitudes. Somebody in government needs to take on this role soon, because the Pentagon warns that its technology is obsolete, having been in place for thirty years.

The need for tracking is vital. Congress needs to decide soon how this is to be done and provide the funding to implement a new tracking system. It would be ironic if the world solves the rural broadband problem using low orbit satellites, only to see those satellites disappear in a cloud of debris. If the debris cloud is allowed to form it could take centuries for it to dissipate.

An Update on ATSC 3.0

This is the year when we’ll finally start seeing the introduction of ATSC 3.0, the newest upgrade to broadcast television and the first big upgrade since TV converted to all-digital over a decade ago. ATSC 3.0 is the latest standard released by the Advanced Television Systems Committee, the body that creates the standards used by over-the-air broadcasters.

ATSC 3.0 will bring several upgrades to broadcast television that should make it more competitive with cable company video and Internet-based programming. For example, the new standard will make it possible to broadcast over-the-air in 4K quality. That’s four times the pixels of 1080i TV and rivals the best quality available from Netflix and other online content providers.

ATSC 3.0 also will support the HDR (high dynamic range) protocol that enhances picture quality by creating a better contrast between light and dark parts of a TV screen. ATSC 3.0 also adds additional sound channels to allow for state-of-the-art surround sound.

Earlier this year, Cord Cutters News reported that the new standard was to be introduced in 61 US markets by the end of 2020 – however, that has slowed a bit due to the COVID-19 pandemic. But the new standard should appear in most major markets by sometime in 2021. Homes will either have to buy ATSC 3.0-enabled TVs, which are just now hitting the market, or buy an external ATSC 3.0 tuner to receive the enhanced signals.

One intriguing aspect of the new standard is that a separate data path is created alongside the TV transmission. This opens up some interesting new features for broadcast TV. For example, a city could selectively send safety alerts and messages to homes in just certain parts of the city. This also could lead to targeted advertising that is not the same in every part of a market. Local advertisers have often hesitated to advertise on broadcast TV because of the cost and the waste of advertising to an entire market instead of just the parts where they sell service.

While still in the early stages of exploration, it’s conceivable that ATSC 3.0 could be used to create a 25 Mbps data transmission path. This might require several stations joining together to create that much bandwidth. While a 25 Mbps data path is no longer a serious competitor to much faster cable broadband, it opens up a lot of interesting possibilities. For example, this bandwidth could offer a competitive alternative for providing data to cellphones and could present a major challenge to cellular carriers and their stingy data caps.
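
The arithmetic behind pooling stations is straightforward. The per-station data payload below is an assumed figure for illustration only, not a published ATSC 3.0 specification:

```python
import math

target_mbps = 25                 # the data path discussed above
data_per_station_mbps = 7        # assumed data payload a station might set aside

stations_needed = math.ceil(target_mbps / data_per_station_mbps)
print(f"Roughly {stations_needed} stations pooling {data_per_station_mbps} Mbps each "
      f"could offer a ~{target_mbps} Mbps data path")
```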

ATSC 3.0 data could also be used to bring broadband into the home of every urban school student. If this broadband was paired with computers for every student, this could go a long way towards solving the homework gap in urban areas. Unfortunately, like most other new technologies, we’re not likely to see the technology in rural markets any time soon, and perhaps never. The broadband signals from tall TV towers will not carry far into rural America.

The FCC voted on June 16 on a few issues related to the ATSC 3.0 standard. In a blow to broadcasters, the FCC decided that TV stations could not use nearby vacant channels to expand ATSC 3.0 capabilities. The FCC instead decided to preserve vacant broadcast channels for white space wireless broadband technology.

The FCC also took a position that isn’t going to sit well with the public. As homeowners have continued to cut the cord, there have been record sales of indoor antennas for receiving over-the-air TV in the last few years. Over-the-air broadcasters are going to be allowed to sunset the older ATSC 1.0 standard in 2023. That means that homes will have to replace TVs or install an external ATSC 3.0 tuner if they want to continue to watch over-the-air broadcasts.

Who Owns Your Connected Device?

It’s been clear for years that IoT companies gather a large amount of data from customers. Everything from a smart thermometer to your new car gathers and reports data back to the cloud. California has tried to tackle customer data privacy through the California Consumer Privacy Act that went into effect on January 1.

Web companies must give California consumers the ability to opt out of having their personal information sold to others. Consumers must be given the option to have their data deleted from the site. Consumers must be provided the opportunity to view the data collected about them. Consumers also must be shown the identity of third parties that have purchased their data. The new law defines personal data broadly to include things like name, address, online identifiers, IP addresses, email addresses, purchasing history, geolocation data, audio/video data, biometric data, or any effort made to classify customers by personality type or trends.

However, there is one area that the new law doesn’t cover. There are examples over the last few years of IoT companies making devices obsolete and nonfunctional. Two examples that got a lot of press involve Charter security systems and Sonos smart speakers.

When Charter purchased Time Warner Cable, the company decided that it didn’t want to support the home security business it had inherited. Charter ended its security business line earlier this year and advised customers that the company would no longer provide alarm monitoring. Unfortunately for customers, this meant their security devices became non-functional. Customers probably felt safe choosing Time Warner Cable as a security company because the company touted that it used off-the-shelf electronics like Ring cameras and Abode security devices – two of the most common brands of DIY smart devices.

Unfortunately for customers, most of the devices won’t work without being connected to the Charter cloud because the company modified the software to only work in a Charter environment. Customers can connect some of the smart devices like smart thermostats and lights to a different hub, but customers can’t repurpose the security devices, which are the most expensive parts of most systems. When the Charter service ended, homeowners were left with security systems that can’t connect to a monitoring service or law enforcement. Charter’s decision to exit the security business turned the devices into bricks.

In a similar situation, Sonos notified owners of older smart speakers that it will no longer support the devices, meaning no more software upgrades or security upgrades. The older speakers will continue to function but can become vulnerable to hackers. Sonos offered owners of the older speakers a 30% discount on newer speakers.

It’s not unusual for older electronics to become obsolete and to no longer be serviced by the manufacturer – it’s something we’re familiar with in the telecom industry. What is unusual is that Sonos told customers that they cannot sell their older speakers without permission from the company. Sonos has this ability because the speakers communicate with the Sonos cloud, and Sonos is not going to allow the old speakers to be registered by somebody else. If I were a Sonos customer, I would also assume this means that the company is likely to eventually block old speakers from its cloud. The company’s notification essentially told customers that their speakers are a worthless brick. This is a shock to folks who spent a lot of money on top-of-the-line speakers.

There are numerous examples of similar incidents in the smart device industry. Google shut down the Revolv smart hub in 2016, making the device unusable. John Deere has the ability to shut off farm equipment costing hundreds of thousands of dollars if farmers use somebody other than John Deere for service. My HP printer gave me warnings that the printer would stop working if I didn’t purchase an HP ink-replacement plan.

This raises the question of whether consumers really own a device if the manufacturer, or some partner of the manufacturer, has the ability to shut the device down at some future time. Unfortunately, when consumers buy smart devices they never get any warning about the manufacturer’s right to kill the devices in the future.

I’m sure the buyers of the Sonos speakers feel betrayed. People likely expect decent speakers to last for decades. I have a hard time imagining somebody taking Sonos up on the offer to buy new speakers at a discount to replace the old ones, because in a few years the company is likely to obsolete the new speakers as well. We have all gotten used to the idea of planned obsolescence. Microsoft stops supporting older versions of Windows, and users continue to use the older software at their own risk. But Microsoft doesn’t shut down computers running old versions of Windows, as Charter is doing. And Microsoft doesn’t stop a customer from selling a computer loaded with an old version of Windows to somebody else, as Sonos is doing.

These two examples provide a warning to consumers that smart devices might come with an expiration date. Any device that continues to interface with the original manufacturer through the cloud can be shut down. It would be an interesting lawsuit if a Sonos customer sued the company for essentially stealing their device.

It’s inevitable that devices grow obsolete over time. Sonos says the older speakers don’t contain enough memory to accept software updates. That’s probably true, but the company went way over the line when they decided to kill old speakers rather than let somebody sell them. Their actions tell customers that they were only renting the speakers and that they always belonged to Sonos.

The Evolution of 5G

Technology always evolves and I’ve been reading about where scientists envision the evolution of 5G. The first generation of 5G, which will be rolled out over the next 3-5 years, is mostly aimed at increasing the throughput of cellular networks. According to Cisco, North American cellular data volumes are growing at a torrid 36% per year, and even faster than that in some urban markets where the volumes of data are doubling every two years. The main goal of first-generation 5G is to increase network capacity to handle that growth.

However, if 5G is deployed only for that purpose we won’t see the giant increases in speed that the public thinks is coming with 5G. Cisco is predicting that the average North American cellular speed in 2026 will be around 70 Mbps – a far cry from the gigabit speed predictions you can find splattered all over the press.
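
A quick compound-growth calculation shows why capacity, rather than headline speed, is the first-generation 5G story. Using the 36% annual growth figure cited above, traffic doubles roughly every two and a quarter years and more than quadruples within five years:

```python
import math

annual_growth = 0.36                     # Cisco's reported North American growth rate
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(f"At {annual_growth:.0%} per year, traffic doubles every {doubling_years:.1f} years")

traffic = 1.0
for year in range(1, 6):
    traffic *= (1 + annual_growth)
    print(f"Year {year}: {traffic:.2f}x today's traffic")
```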

There is already academic and lab work looking into what is being labeled as 6G. That will use terahertz spectrum and promises to deliver wireless speeds of as much as 1 terabit per second. I’ve already seen a few articles touting this as a giant breakthrough, but the articles didn’t mention that the effective distance for this spectrum can be measured in feet – this will be an indoor technology and will not be the next cellular replacement for 5G.

This means that to some degree, 5G is the end of the line in terms of cellular delivery. This is likely why the cellular carriers are gobbling up as much spectrum as they can. That spectrum isn’t all needed today but will be needed by the end of the decade. The cellular carriers will use every spectrum block now to preserve the licenses, but the heavy lifting for most of the spectrum being purchased today will come into play a decade or more from now – the carriers are playing the long game so that they aren’t irrelevant in the not-too-distant future.

This doesn’t mean that 5G is a dead end – the technology will continue to evolve. Here are a few of the ideas being explored in labs today that will enhance 5G performance a decade from now:

  • Large Massive Network MIMO. This means expanding the density and capacity of cellular antennas to simultaneously be able to handle multiple spectrum bands. We need much better antennas if we are to get vastly greater data volumes into and out of cellular devices. For now, data speeds on cellphones are being limited by the capacity of the antennas.
  • Ultra Dense Networks (UDN). This envisions the end of cell sites in the way we think about them today. This would come first in urban networks where there will be a hyper-dense deployment of wireless access devices that would likely incorporate small cells, WiFi routers, femtocells, and M2M gateways. In such an environment, cellphones can interact with the cloud rather than with a traditional cell site. This eliminates the traditional cellular standard of one cell site controlling a transaction. In a UDN network, a cellular device could connect anywhere.
  • Device-to-Device (D2D) Connectivity. The smart 5G network in the future will let nearby devices communicate with each other without having to pass traffic back and forth to a data hub. This would move some cellular transactions to the edge, and would significantly reduce logjams at data centers and on middle-mile fiber routes.
  • A Machine-to-Machine (M2M) Layer. A huge portion of future web traffic will be communications between devices and the cloud. This research envisions a separate cellular network for such traffic that maximizes M2M communications separately from traffic used by people.
  • Use of AI. Smart networks will be able to shift and react to changing demands and will be able to shuffle and share network resources as needed. For example, if there is a street fair in a neighborhood street that usually carries only vehicle traffic, the network would smartly reconfigure to meet the changing demand for connectivity.
  • Better Batteries. None of these improvements will come along until there are better ‘lifetime’ batteries that allow devices to use more antennas and process more data.

Wireless marketing folks will be challenged to find ways to describe these future improvements in the 5G network. If the term 6G becomes associated with terahertz spectrum, marketers are going to have to find something other than a ‘G’ term to over-hype the new technologies.

Are You Ready for 400 Gb?

AT&T recently activated a 400-gigabit fiber connection between Dallas and Atlanta and claimed it is the first such connection in the country. This is a milestone because it represents a major upgrade in fiber speeds in our networks. While scientists in the lab have created multi-terabit lasers, our fiber network backbones have mostly relied on 100-gigabit or slower laser technology for the last decade.

Broadband demand has grown by a huge amount over the last decade. We’ve seen double-digit annual growth in residential broadband, business broadband, cellular data, and machine-to-machine data traffic. Our backbone and transport networks are busy and often full. AT&T says it’s going to need the faster fiber transport to accommodate 5G, gaming, and ever-growing video traffic volumes.

I’ve heard concerns from network engineers that some of our long-haul fiber routes, such as the ones along the east coast, are overloaded and in danger of being swamped. Having the ability to upgrade long-haul fiber routes from 100 Gb to 400 Gb is a nice improvement – but not as good as you might imagine. If a 100 Gb fiber route is nearly full and is upgraded to 400 Gb, the life of that route is only stretched another six years if network traffic volumes are doubling every three years. But upgrading is a start and a stopgap measure.
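
The six-year figure is just doubling math: a jump from 100 Gb to 400 Gb is two doublings of capacity, and at one traffic doubling every three years that buys six years. A minimal sketch, so you can plug in your own growth assumptions:

```python
import math

def years_of_headroom(capacity_multiple: float, doubling_period_years: float) -> float:
    """How long a capacity upgrade lasts if traffic doubles every N years."""
    doublings = math.log2(capacity_multiple)
    return doublings * doubling_period_years

print(years_of_headroom(400 / 100, doubling_period_years=3))   # prints 6.0
```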

AT&T is also touting that they used white box hardware for this new deployment. White box hardware uses inexpensive generic switches and routers controlled by open-source software. AT&T is likely replacing a 100 Gb traditional electronics route with a much cheaper white box solution. Folks who don’t work with long-haul networks probably don’t realize the big cost of electronics needed to light a long fiber route like this one between Dallas and Atlanta. Long-haul fiber requires numerous heated and cooled huts placed along the route that house repeaters needed to amplify the signal. A white box solution doesn’t just mean less expensive lasers at the end points, but at all of the intermediate points along the fiber route.

AT&T views 400 Gb transport as the next generation of technology needed in our networks, and the company submitted specifications to the Open Compute Project for an array of different 400 Gb chassis and backbone fabrics. The AT&T specifications rely on Broadcom’s Jericho2 family of chips.

100 Gb electronics are not used only in long-haul data routes. I have a lot of clients that operate fiber-to-the-home networks that use a 100 Gb backbone to provide the bandwidth to reach multiple neighborhoods. In local networks that are fiber-rich, there is always a trade-off between the cost of upgrading to faster electronics and the cost of lighting additional fiber pairs. As an existing 100 Gb fiber starts getting full, network engineers will consider the cost of lighting a second 100 Gb route versus upgrading to the 400 Gb technology. The fact that AT&T is pushing this as a white box solution likely means that it will be cheaper to upgrade to a new 400 Gb network than to buy a second set of traditional 100 Gb electronics.

There are other 400 Gb solutions hitting the market from Cisco, Juniper, and Arista Networks – but all will be more expensive than a white box solution. Network engineers always talk about chokepoints in a network – places where traffic volume exceeds network capability. Among the most worrisome chokepoints for ISPs are the long-haul fiber networks that connect communities – because those routes are out of the control of the last-mile ISP. It’s reassuring to know there are technology upgrades that will let the industry keep up with demand.

Expect a New Busy Hour

One of the many consequences of the coronavirus is that networks are going to see a shift in busy hour traffic. Busy hour traffic is just what it sounds like – it’s the time of the day when a network is busiest, and network engineers design networks to accommodate the expected peak amount of bandwidth usage.

Verizon reported on March 18 that in the week since people started moving to work from home, they’ve seen a 20% overall increase in broadband traffic. Verizon says that gaming traffic is up 75% as those stuck at home turn to gaming for entertainment. They also report that VPN (virtual private network) traffic is up 34%. A lot of connections between homes and corporate and school WANs are using a VPN.

These are the kinds of increases that can scare network engineers, because Verizon just saw a typical year’s growth in traffic happen in a week. Unfortunately, the announced Verizon traffic increases aren’t even the whole story, since we’re just at the beginning of the response to the coronavirus. There are still companies figuring out how to give secure access to company servers, and work-from-home traffic is bound to grow in the next few weeks. I think we’ll see a big jump in video conference traffic on platforms like Zoom as more meetings move online as an alternative to live meetings.

For most of my clients, the busy hour has been in the evening when many homes watch video or play online games. The new paradigm has to be scaring network engineers. There is now likely going to be a lot of online video watching and gaming during the daytime in addition to the evening. The added traffic for those working from home is probably the most worrisome traffic since a VPN connection to a corporate WAN will tie up a dedicated path through the Internet backbone – bandwidth that isn’t shared with others. We’ve never worried about VPN traffic when it was a small percentage of total traffic – but it could become one of the biggest continual daytime uses of bandwidth. All of the work that used to occur between employees and the corporate server inside of the business is now going to traverse the Internet.

I’m sure network engineers everywhere are keeping an eye on the changing traffic, particularly to the amount of broadband used during the busy hour. There are a few ways that the busy hour impacts an ISP. First, they must buy enough bandwidth to the Internet to accommodate everybody. It’s typical to buy at least 15% to 20% more bandwidth than is expected for the busy hour. If the size of the busy hour shoots higher, network engineers are going to have to quickly buy a larger pipe to the Internet, or else customer performance will suffer.

Network engineers also keep a close eye on their network utilization. For example, most networks operate with some rule of thumb, such as it’s time to upgrade electronics when any part of the network hits some pre-determined threshold like 85% utilization. These rules of thumb have been developed over the years as warning signs to provide time to make upgrades.
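
A monitoring rule of that kind can be expressed in a few lines. The sketch below, using made-up link measurements, flags any link whose busy-hour utilization crosses the 85% upgrade threshold mentioned above:

```python
UPGRADE_THRESHOLD = 0.85      # common rule-of-thumb utilization trigger

# Hypothetical busy-hour measurements: (link name, capacity in Gbps, peak usage in Gbps)
links = [
    ("internet transit",  10.0, 9.1),
    ("fiber ring north",  10.0, 6.2),
    ("neighborhood node",  1.0, 0.88),
]

for name, capacity_gbps, peak_gbps in links:
    utilization = peak_gbps / capacity_gbps
    status = "UPGRADE NEEDED" if utilization >= UPGRADE_THRESHOLD else "ok"
    print(f"{name:18s} {utilization:5.0%}  {status}")
```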

The explosion of traffic due to the coronavirus might shoot many networks past these warning signs, and networks will start experiencing chokepoints that weren’t anticipated just a few weeks earlier. Most networks have numerous possible chokepoints – and each is monitored. For example, there is usually a chokepoint going into each neighborhood. There are often chokepoints on fiber rings. There might be chokepoints on switch and router capacity at the network hub. And there can be a chokepoint on the data pipe going out to the Internet. If any one part of the network gets overly busy, network performance can degrade quickly.

What is scariest for network engineers is that traffic from the reaction to the coronavirus is being layered on top of networks that have already been experiencing steady growth. Most of my clients have been seeing year-over-year traffic volume increases of 20% to 30%. If Verizon’s experience is indicative of what we’ll all see, then networks will see a year’s typical growth happen in just weeks. We’ve never experienced anything like this, and I’m guessing there aren’t a lot of network engineers who are sleeping well this week.

Introducing 6 GHz into WiFi

WiFi is already the most successful deployment of spectrum ever. In its recent Annual Internet Report, Cisco predicted that by 2022 WiFi will cross the threshold of carrying more than 50% of global IP traffic. Cisco also predicts that by 2023 there will be 628 million WiFi hotspots – most used for home broadband.

These are amazing statistics when you consider that WiFi has been limited to 70 MHz of spectrum in the 2.4 GHz band and 500 MHz in the 5 GHz band. That’s all about to change as two major upgrades are being made to WiFi – the upgrade to WiFi 6 and the integration of 6 GHz spectrum into WiFi.

The Impact of WiFi 6. WiFi 6 is the new consumer-friendly name given to the next generation of WiFi technology (replacing the term 802.11ax). Even without the introduction of new spectrum, WiFi 6 will significantly improve performance over WiFi 5 (802.11ac).

The problem with current WiFi is congestion. Congestion comes in two ways – from multiple devices trying to use the same router, and from multiple routers trying to use the same channels. My house is probably typical, and we have a few dozen devices that can use the WiFi router. My wife’s Subaru even connects to our network to check for updates every time she pulls into the driveway. With only two of us in the house, we don’t overtax our router – but we can when my daughter is home from college.

Channel congestion is the real culprit in our neighborhood. We live in a moderately dense neighborhood of single-family homes and we can all see multiple WiFi networks. I just looked at my computer and I see 24 other WiFi networks, including the delightfully named ‘More Cowbell’ and ‘Very Secret CIA Network’. All of these networks are using the same small number of channels, and WiFi pauses whenever it sees a demand for bandwidth from any of these networks.

Both kinds of congestion slow down throughput due to the nature of the WiFi specification. The demands for routers and for channels are queued and each device has to wait its turn to transmit or receive data. Theoretically, a WiFi network can transmit data quickly by grabbing a full channel – but that rarely happens. The existing 5 GHz band has six 80-MHz and two 160-MHz channels available. A download of a big file could go quickly if a full channel could be used for the purpose. However, if there are overlapping demands for even a portion of a channel then the whole channel is not assigned for a specific task.

WiFi 6 introduces a few major upgrades in the way that WiFi works to decrease congestion. The first is the introduction of orthogonal frequency-division multiple access (OFDMA). This technology allows devices to transmit simultaneously rather than wait for a turn in the queue. OFDMA divides channels into smaller sub-channels called resource units. The analogy used in the industry is that this will open WiFi from a single-lane technology to a multi-lane freeway. WiFi 6 also uses other techniques like improved beamforming to make a focused connection to a specific device, which lowers the chances of interference from other devices.
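
One way to picture OFDMA is a scheduler handing out slices of a single channel to several devices in the same transmit opportunity, instead of giving the whole channel to one device at a time. The sketch below uses the 802.11ax resource-unit sizes available within a 20 MHz channel (26, 52, 106, and 242 tones) and a simple greedy assignment; the device list is invented, and real schedulers are far more sophisticated.

```python
# 802.11ax resource-unit sizes (in tones) available within a single 20 MHz channel.
RU_SIZES = {"small": 26, "medium": 52, "large": 106, "full_channel": 242}

# Hypothetical devices waiting to transmit, with a rough demand level.
devices = [
    ("laptop video call",  "large"),
    ("phone web browsing", "medium"),
    ("smart thermostat",   "small"),
    ("tablet streaming",   "large"),
]

tones_available = 242          # usable tones in one 20 MHz 802.11ax channel
schedule = []
for name, demand in devices:
    ru = RU_SIZES[demand]
    if ru <= tones_available:            # greedy: grant resource units until the channel is full
        schedule.append((name, ru))
        tones_available -= ru
    else:
        schedule.append((name, 0))       # must wait for the next transmit opportunity

for name, ru in schedule:
    grant = f"{ru} tones" if ru else "deferred to next opportunity"
    print(f"{name:20s} -> {grant}")
```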

The Impact of 6 GHz. WiFi performance was already getting a lot better due to WiFi 6 technology. Adding the 6 GHz spectrum will drive performance to yet another level. The 6 GHz spectrum adds seven 160 MHz channels to the WiFi environment (or, alternatively, fifty-nine 20 MHz channels). For the typical WiFi environment, such as a home in an urban setting, this is enough new channels that a big bandwidth demand ought to be able to grab a full 160 MHz channel. This is going to increase the perceived speeds of WiFi routers significantly.

When the extra bandwidth is paired with OFDMA technology, interference ought to be a thing of the past, except perhaps in super-busy environments like a business hotel or a stadium. Undoubtedly, we’ll find ways over the next decade to fill up WiFi 6 routers and we’ll eventually be begging the FCC for even more WiFi spectrum. But for now, this should solve WiFi interference in all but the toughest WiFi environments.

It’s worth a word of caution that this improvement isn’t going to happen overnight. You need both a WiFi 6 router and WiFi 6-capable devices to take advantage of the new technology. You’ll also need devices capable of using the 6 GHz spectrum. Unless you’re willing to throw away every WiFi device in your home and start over, it’s going to take most homes years to migrate into the combined benefits of WiFi 6 and 6 GHz spectrum.

There is No Artificial Intelligence

It seems like most new technology today comes with a lot of hype. Just a few years ago, the press was full of predictions that we’d be awash with Internet of Things sensors that would transform the way we live. We’ve heard similar claims for technologies like virtual reality, blockchain, and self-driving cars. I’ve written a lot about the massive hype surrounding 5G – in my way of measuring things, there isn’t any 5G in the world yet, but the cellular carriers are loudly proclaiming it’s everywhere.

The other technology with a hype that nearly equals 5G is artificial intelligence. I see articles every day talking about the ways that artificial intelligence is already changing our world, with predictions about the big changes on the horizon due to AI. A majority of large corporations claim to now be using AI. Unfortunately, this is all hype and there is no artificial intelligence today, just like there is not yet any 5G.

It’s easy to understand what real 5G will be like – it will include the many innovations embedded in the 5G specifications like network slicing and dynamic spectrum sharing. We’ll finally have 5G when a half dozen new 5G technologies are on my phone. Defining artificial intelligence is harder because there is no specification for AI. Artificial intelligence will be here when a computer can solve problems in much the way that humans do. Our brains evaluate the data on hand to see if we know enough to solve a problem. If not, we seek the additional data we need. Our brains can consider data from disparate and unrelated sources to solve problems. There is no computer today that is within a light-year of that ability – there are not yet any computers that can ask for the specific additional data needed to solve a problem. An AI computer doesn’t need to be self-aware – it just has to be able to ask the questions and seek the right data needed to solve a given problem.

We use computer tools today that get labeled as artificial intelligence such as complex algorithms, machine learning, and deep learning. We’ve paired these techniques with faster and larger computers (such as in data centers) to quickly process vast amounts of data.

One of the techniques we think of as artificial intelligence is nothing more than using brute force to process large amounts of data. This is how IBM’s Deep Blue worked. It produced impressive results and shocked the world in 1997 when the computer beat Garry Kasparov, the world chess champion. Since then, the IBM Watson system has beaten the best Jeopardy players and is being used to diagnose illnesses. These computers achieve their results by processing vast amounts of data quickly. A chess computer can consider huge numbers of possible moves and put a value on the ones with the best outcome. The Jeopardy computer had massive databases of human knowledge available like Wikipedia and Google search – it looks up the answer to a question faster than a human mind can pull it out of memory.
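
The brute-force approach can be illustrated with the classic minimax search that game programs are built on: enumerate the possible moves, score the resulting positions, and choose the branch with the best guaranteed outcome. The toy version below searches an invented numeric game a few levels deep; it is only a sketch of the idea, not Deep Blue’s actual algorithm, which evaluated hundreds of millions of chess positions per second with elaborate pruning and evaluation functions.

```python
def minimax(state, depth, maximizing, get_moves, evaluate):
    """Score a game state by exhaustively searching `depth` levels of moves."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = [minimax(m, depth - 1, not maximizing, get_moves, evaluate) for m in moves]
    return max(scores) if maximizing else min(scores)

# A toy "game": a state is just a number, a move adds to it or doubles it,
# and the evaluation prefers states close to 21. Purely illustrative.
get_moves = lambda n: [n + 1, n + 3, n * 2] if n < 40 else []
evaluate = lambda n: -abs(21 - n)

best = max((minimax(move, 3, False, get_moves, evaluate), move)
           for move in get_moves(5))
print(f"Best first move from 5 scores {best[0]} (move to {best[1]})")
```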

Much of what is thought of as AI today uses machine learning. Perhaps the easiest way to describe machine learning is with an example. Machine learning uses complex algorithms to analyze and rank data. Netflix uses machine learning to suggest shows that it thinks a given customer will like. Netflix knows what a viewer has already watched. Netflix also knows what millions of others who watch the same shows seem to like, and it looks at what those millions of others watched to make a recommendation. The algorithm is far from perfect because the data set of what any individual viewer has watched is small. I know in my case, I look at the shows recommended for my wife and see all sorts of shows that interest me, but which I am not offered. This highlights one of the problems of machine learning – it can easily be biased and draw wrong conclusions instead of right ones. Netflix’s suggestion algorithm can become a self-fulfilling prophecy unless a viewer makes the effort to look outside of the recommended shows – the more a viewer watches what is suggested, the more they are pigeonholed into a specific type of content.
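
A stripped-down version of the “viewers who watched X also watched Y” logic looks something like this: build a viewer-by-show matrix, compute similarity between shows, and recommend the unwatched shows most similar to what a viewer has already seen. The shows and viewing data below are invented, and Netflix’s real system is vastly more elaborate.

```python
import numpy as np

# Rows are viewers, columns are shows; 1 means the viewer watched the show.
shows = ["Drama A", "Drama B", "Comedy A", "Comedy B", "Documentary"]
watched = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
])

# Cosine similarity between show columns.
norms = np.linalg.norm(watched, axis=0)
similarity = (watched.T @ watched) / np.outer(norms, norms)

def recommend(viewer_row, top_n=2):
    """Score unwatched shows by their similarity to shows the viewer watched."""
    scores = similarity @ viewer_row
    scores[viewer_row == 1] = -1          # don't recommend what they already saw
    return [shows[i] for i in np.argsort(scores)[::-1][:top_n]]

new_viewer = np.array([1, 0, 0, 0, 0])    # has only watched "Drama A"
print(recommend(new_viewer))              # suggests the shows most co-watched with it
```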

Deep learning is a form of machine learning that can produce better results by passing data through multiple layered algorithms. For example, there are numerous forms of English spoken around the world. A customer service bot can begin each conversation in standard English and then use layered algorithms to analyze the speaker’s dialect and adjust its responses to more closely match that speaker.

I’m not implying that today’s techniques are not worthwhile. They are being used to create numerous automated applications that could not be done otherwise. However, almost every algorithm-based technique in use today will become instantly obsolete when a real AI is created.

I’ve read several experts who predict that we are only a few years away from an AI desert – meaning that we will have milked about all that can be had out of machine learning and deep learning. Developments with those techniques are not leading towards a breakthrough to real AI – machine learning is not part of the evolutionary path to AI. At least for today, both AI and 5G are largely non-existent, and the things passed off as these two technologies are pale versions of the real thing.