Are You Ready for 400 Gb?

AT&T recently activated a 400-gigabit fiber connection between Dallas and Atlanta and claimed it is the first such connection in the country. This is a milestone because it represents a major upgrade in fiber speeds in our networks. While scientists in labs have created multi-terabit lasers, our fiber network backbones for the last decade have mostly relied on 100-gigabit or slower laser technology.

Broadband demand has grown by a huge amount over the last decade. We’ve seen double-digit annual growth in residential broadband, business broadband, cellular data, and machine-to-machine data traffic. Our backbone and transport networks are busy and often full. AT&T says it’s going to need the faster fiber transport to accommodate 5G, gaming, and ever-growing video traffic volumes.

I’ve heard concerns from network engineers that some of our long-haul fiber routes, such as the ones along the east coast, are overloaded and in danger of being swamped. Having the ability to upgrade long-haul fiber routes from 100 Gb to 400 Gb is a nice upgrade – but not as good as you might imagine. If a 100 Gb fiber route is nearly full and is upgraded to 400 Gb, the life of that route is only stretched another six years if network traffic volumes are doubling every three years. But upgrading is a start and a stopgap measure.
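The arithmetic behind that six-year figure is simple exponential growth. A minimal sketch, using the doubling assumption from above:

```python
import math

def years_until_full(capacity_multiple: float, doubling_years: float) -> float:
    """Years before traffic that fills the old capacity grows to fill
    the new capacity, if traffic doubles every doubling_years."""
    return math.log2(capacity_multiple) * doubling_years

# A nearly full 100 Gb route upgraded to 400 Gb is a 4x capacity jump.
# With traffic doubling every 3 years: log2(4) * 3 = 6 years of added life.
print(years_until_full(400 / 100, 3))  # 6.0
```

Even a 10x upgrade would only buy about ten years under the same growth assumption, which is why an upgrade like this is a stopgap rather than a permanent fix.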

AT&T is also touting that they used white box hardware for this new deployment. White box hardware uses inexpensive generic switches and routers controlled by open-source software. AT&T is likely replacing a 100 Gb traditional electronics route with a much cheaper white box solution. Folks who don’t work with long-haul networks probably don’t realize the big cost of electronics needed to light a long fiber route like this one between Dallas and Atlanta. Long-haul fiber requires numerous heated and cooled huts placed along the route that house repeaters needed to amplify the signal. A white box solution doesn’t just mean less expensive lasers at the end points, but at all of the intermediate points along the fiber route.

AT&T views 400 Gb transport as the next generation of technology needed in our networks, and the company submitted specifications to the Open Compute Project for an array of different 400 Gb chassis and backbone fabrics. The AT&T specifications rely on Broadcom’s Jericho2 family of chips.

100 Gb electronics are not used only in long-haul data routes today. I have a lot of clients that operate fiber-to-the-home networks that use a 100 Gb backbone to provide the bandwidth to reach multiple neighborhoods. In local networks that are fiber-rich, there is always a trade-off between the cost of upgrading to faster electronics and the cost of lighting additional fiber pairs. As an existing 100 Gb fiber starts getting full, network engineers will consider the cost of lighting a second 100 Gb route versus upgrading to the 400 Gb technology. The fact that AT&T is pushing this as a white box solution likely means that it will be cheaper to upgrade to a new 400 Gb network than to buy a second traditional 100 Gb set of electronics.

There are other 400 Gb solutions hitting the market from Cisco, Juniper, and Arista Networks – but all will be more expensive than a white box solution. Network engineers always talk about chokepoints in a network – places where the traffic volume exceeds the network capability. One of the most worrisome chokepoints for ISPs is the long-haul fiber networks that connect communities – because those routes are out of the control of the last-mile ISP. It’s reassuring to know there are technology upgrades that will let the industry keep up with demand.

Expect a New Busy Hour

One of the many consequences of the coronavirus is that networks are going to see a shift in busy hour traffic. Busy hour traffic is just what it sounds like – it’s the time of the day when a network is busiest, and network engineers design networks to accommodate the expected peak amount of bandwidth usage.

Verizon reported on March 18 that in the week since people started moving to work from home, they’ve seen a 20% overall increase in broadband traffic. Verizon says that gaming traffic is up 75% as those stuck at home are turning to gaming for entertainment. They also report that VPN (virtual private network) traffic is up 34%. A lot of connections between homes and corporate and school WANs are using a VPN.

These are the kind of increases that can scare network engineers, because Verizon just saw a typical year’s growth in traffic happen in a week. Unfortunately, the announced Verizon traffic increases aren’t even the whole story since we’re just at the beginning of the response to the coronavirus. There are still companies figuring out how to give secure access to company servers, and the work-from-home traffic is bound to grow in the next few weeks. I think we’ll see a big jump in video conference traffic on platforms like Zoom as more meetings move online as an alternative to live meetings.

For most of my clients, the busy hour has been in the evening when many homes watch video or play online games. The new paradigm has to be scaring network engineers. There is now likely going to be a lot of online video watching and gaming during the daytime in addition to the evening. The added traffic for those working from home is probably the most worrisome traffic since a VPN connection to a corporate WAN will tie up a dedicated path through the Internet backbone – bandwidth that isn’t shared with others. We’ve never worried about VPN traffic when it was a small percentage of total traffic – but it could become one of the biggest continual daytime uses of bandwidth. All of the work that used to occur between employees and the corporate server inside of the business is now going to traverse the Internet.

I’m sure network engineers everywhere are keeping an eye on the changing traffic, particularly the amount of broadband used during the busy hour. There are a few ways that the busy hour impacts an ISP. First, they must buy enough bandwidth to the Internet to accommodate everybody. It’s typical to buy at least 15% to 20% more bandwidth than is expected for the busy hour. If the size of the busy hour shoots higher, network engineers are going to have to quickly buy a larger pipe to the Internet, or else customer performance will suffer.
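As a rough illustration of that provisioning math – the ISP size and traffic figures here are hypothetical:

```python
def pipe_to_buy(busy_hour_gbps: float, headroom: float = 0.20) -> float:
    """Internet pipe to provision: busy-hour demand plus a safety cushion."""
    return busy_hour_gbps * (1 + headroom)

# A hypothetical ISP with a 10 Gbps busy hour, using the 15% and 20% cushions:
print(f"{pipe_to_buy(10, 0.15):.1f} Gbps")  # 11.5 Gbps
print(f"{pipe_to_buy(10, 0.20):.1f} Gbps")  # 12.0 Gbps

# If the busy hour itself suddenly jumps 20%, the old purchase is short:
print(f"{pipe_to_buy(10 * 1.20, 0.20):.1f} Gbps now needed")  # 14.4 Gbps now needed
```

A 20% jump in the busy hour consumes the entire 20% cushion overnight, which is exactly why engineers would scramble to buy a bigger pipe.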

Network engineers also keep a close eye on their network utilization. For example, most networks operate with some rule of thumb, such as deciding it’s time to upgrade electronics when any part of the network hits a predetermined threshold like 85% utilization. These rules of thumb have been developed over the years as warning signs to provide time to make upgrades.
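That rule of thumb is easy to sketch; the segment names and load figures below are invented for illustration:

```python
def needs_upgrade(used_gbps: float, capacity_gbps: float, threshold: float = 0.85) -> bool:
    """Flag a network segment once utilization crosses the rule-of-thumb threshold."""
    return used_gbps / capacity_gbps >= threshold

# Hypothetical segments: (name, current load in Gbps, capacity in Gbps)
segments = [("neighborhood node", 7.2, 10), ("fiber ring", 68, 100), ("internet pipe", 9.1, 10)]
for name, used, cap in segments:
    if needs_upgrade(used, cap):
        print(f"{name}: {used / cap:.0%} utilized - time to upgrade")
```

In this made-up example only the internet pipe trips the alarm, but a sudden surge in traffic can push every monitored segment past the threshold at once.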

The explosion of traffic due to the coronavirus might shoot many networks past these warning signs, and networks will start experiencing chokepoints that weren’t anticipated just a few weeks earlier. Most networks have numerous possible chokepoints – and each is monitored. For example, there is usually a chokepoint going into neighborhoods. There are often chokepoints on fiber rings. There might be chokepoints on switch and router capacity at the network hub. There can be a chokepoint on the data pipe going to the world. If any one part of the network gets overly busy, then network performance can degrade quickly.

What is scariest for network engineers is that traffic from the reaction to the coronavirus is being layered on top of networks that have already been experiencing steady growth. Most of my clients have been seeing year-over-year traffic volume increases of 20% to 30%. If Verizon’s experience is indicative of what we’ll all see, then networks will see a year’s typical growth happen in just weeks. We’ve never experienced anything like this, and I’m guessing there aren’t a lot of network engineers who are sleeping well this week.
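To put “a year’s growth in weeks” into numbers – a rough sketch that assumes growth compounds smoothly through the year:

```python
import math

def weeks_of_normal_growth(jump: float, annual_growth: float) -> float:
    """How many weeks of steady compounding annual growth a sudden jump equals."""
    return 52 * math.log(1 + jump) / math.log(1 + annual_growth)

# Verizon's reported 20% jump in one week, against typical annual growth rates:
print(f"{weeks_of_normal_growth(0.20, 0.20):.0f} weeks")  # 52 weeks - a year at once
print(f"{weeks_of_normal_growth(0.20, 0.30):.0f} weeks")  # 36 weeks
```

Against a network that normally grows 20% a year, the jump literally is a full year of growth; even against 30% annual growth it represents roughly eight months arriving in a single week.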

Introducing 6 GHz into WiFi

WiFi is already the most successful deployment of spectrum ever. In its recent Annual Internet Report, Cisco predicted that by 2022 WiFi will cross the threshold and carry more than 50% of global IP traffic. Cisco also predicts that by 2023 there will be 628 million WiFi hotspots – most used for home broadband.

These are amazing statistics when you consider that WiFi has been limited to using 70 MHz of spectrum in the 2.4 GHz spectrum band and 500 MHz in the 5 GHz spectrum band. That’s all about to change as two major upgrades are being made to WiFi – the upgrade to WiFi 6 and the integration of 6 GHz spectrum into WiFi.

The Impact of WiFi 6. WiFi 6 is the new consumer-friendly name given to the next generation of WiFi technology (replacing the term 802.11ax). Even without the introduction of new spectrum, WiFi 6 will significantly improve performance over WiFi 5 (802.11ac).

The problem with current WiFi is congestion. Congestion comes in two ways – from multiple devices trying to use the same router, and from multiple routers trying to use the same channels. My house is probably typical, and we have a few dozen devices that can use the WiFi router. My wife’s Subaru even connects to our network to check for updates every time she pulls into the driveway. With only two of us in the house, we don’t overtax our router – but we can when my daughter is home from college.

Channel congestion is the real culprit in our neighborhood. We live in a moderately dense neighborhood of single-family homes and we can all see multiple WiFi networks. I just looked at my computer and I see 24 other WiFi networks, including the delightfully named ‘More Cowbell’ and ‘Very Secret CIA Network’. All of these networks are using the same small number of channels, and WiFi pauses whenever it sees a demand for bandwidth from any of these networks.

Both kinds of congestion slow down throughput due to the nature of the WiFi specification. The demands for routers and for channels are queued, and each device has to wait its turn to transmit or receive data. Theoretically, a WiFi network can transmit data quickly by grabbing a full channel – but that rarely happens. The existing 5 GHz band has six 80-MHz and two 160-MHz channels available. A download of a big file could go quickly if a full channel could be used for the purpose. However, if there are overlapping demands for even a portion of a channel, then the whole channel is not assigned to a specific task.

WiFi 6 introduces a few major upgrades in the way that WiFi works to decrease congestion. The first is the introduction of orthogonal frequency-division multiple access (OFDMA). This technology allows devices to transmit simultaneously rather than wait for a turn in the queue. OFDMA divides channels into smaller sub-channels called resource units. The analogy used in the industry is that this will turn WiFi from a single-lane road into a multi-lane freeway. WiFi 6 also uses other techniques like improved beamforming to make a focused connection to a specific device, which lowers the chances of interference from other devices.
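A toy illustration of why that matters – real WiFi 6 schedulers are far more sophisticated, and the demands and channel rate below are invented – but it shows how splitting a channel into sub-channels frees small transmissions from queuing behind a big one:

```python
def serial_finish_times(demands_mb, channel_mbps):
    """Pre-OFDMA behavior: each device waits its turn for the whole channel."""
    t, finishes = 0.0, []
    for d in demands_mb:
        t += d / channel_mbps
        finishes.append(t)
    return finishes

def ofdma_finish_times(demands_mb, channel_mbps):
    """OFDMA sketch: channel split into equal resource units, all transmit at once."""
    share = channel_mbps / len(demands_mb)
    return [d / share for d in demands_mb]

# One big 100 Mb transfer plus three small 10 Mb ones on a 400 Mbps channel:
demands = [100, 10, 10, 10]
print(serial_finish_times(demands, 400))  # the small flows queue behind the big one
print(ofdma_finish_times(demands, 400))   # the small flows finish almost immediately
```

In the serial case, every small transfer waits for the big download to finish; with the channel subdivided, the small flows complete in a tenth of a second instead of sitting in the queue.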

The Impact of 6 GHz. WiFi performance was already getting a lot better due to WiFi 6 technology. Adding the 6 GHz spectrum will drive performance to yet another level. The 6 GHz spectrum adds seven 160 MHz channels to the WiFi environment (or alternately fifty-nine 20 MHz channels). For the typical WiFi environment, such as a home in an urban setting, this is enough new channels that a big bandwidth demand ought to be able to grab a full 160 MHz channel. This is going to increase the perceived speeds of WiFi routers significantly.
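The channel arithmetic works out as follows, using the counts from the text (actual channelization varies by regulatory domain):

```python
# New 6 GHz spectrum, counted two ways:
new_mhz = 59 * 20            # fifty-nine 20 MHz channels
print(new_mhz)               # 1180 MHz of new spectrum
print(7 * 160)               # 1120 MHz when carved into seven 160 MHz channels

# Versus what WiFi had before: 70 MHz at 2.4 GHz plus 500 MHz at 5 GHz
old_mhz = 70 + 500
print(f"{(old_mhz + new_mhz) / old_mhz:.1f}x total spectrum")  # 3.1x total spectrum
```

The new band alone is roughly double all the spectrum WiFi had before, which is why a full 160 MHz channel should routinely be free for a big demand.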

When the extra bandwidth is paired with OFDMA technology, interference ought to be a thing of the past, except perhaps in super-busy environments like a business hotel or a stadium. Undoubtedly, we’ll find ways over the next decade to fill up WiFi 6 routers and we’ll eventually be begging the FCC for even more WiFi spectrum. But for now, this should solve WiFi interference in all but the toughest WiFi environments.

It’s worth a word of caution that this improvement isn’t going to happen overnight. You need both a WiFi 6 router and WiFi 6-capable devices to take advantage of the new WiFi 6 technology. You’ll also need devices capable of using the 6 GHz spectrum. Unless you’re willing to throw away every WiFi device in your home and start over, it’s going to take most homes years to migrate to the combined benefits of WiFi 6 and 6 GHz spectrum.

There is No Artificial Intelligence

It seems like most new technology today comes with a lot of hype. Just a few years ago, the press was full of predictions that we’d be awash with Internet of Things sensors that would transform the way we live. We’ve heard similar claims for technologies like virtual reality, blockchain, and self-driving cars. I’ve written a lot about the massive hype surrounding 5G – in my way of measuring things, there isn’t any 5G in the world yet, but the cellular carriers are loudly proclaiming it’s everywhere.

The other technology with hype that nearly equals 5G’s is artificial intelligence. I see articles every day talking about the ways that artificial intelligence is already changing our world, with predictions about the big changes on the horizon due to AI. A majority of large corporations claim to now be using AI. Unfortunately, this is all hype, and there is no artificial intelligence today, just like there is not yet any 5G.

It’s easy to understand what real 5G will be like – it will include the many innovations embedded in the 5G specifications like network slicing and dynamic spectrum sharing. We’ll finally have 5G when a half dozen new 5G technologies are on my phone. Defining artificial intelligence is harder because there is no specification for AI. Artificial intelligence will be here when a computer can solve problems in much the way that humans do. Our brains evaluate the data on hand to see if we know enough to solve a problem, and if not, we seek the additional data we need. Our brains can consider data from disparate and unrelated sources to solve problems. There is no computer today that is within a light-year of that ability – there are not yet any computers that can ask for the specific additional data needed to solve a problem. An AI computer doesn’t need to be self-aware – it just has to be able to ask the questions and seek the right data needed to solve a given problem.

We use computer tools today that get labeled as artificial intelligence such as complex algorithms, machine learning, and deep learning. We’ve paired these techniques with faster and larger computers (such as in data centers) to quickly process vast amounts of data.

One of the techniques we think of as artificial intelligence is nothing more than using brute force to process large amounts of data. This is how IBM’s Deep Blue works. It can produce impressive results and shocked the world in 1997 when the computer beat Garry Kasparov, the world chess champion. Since then, the IBM Watson system has beaten the best Jeopardy players and is being used to diagnose illnesses. These computers achieve their results by processing vast amounts of data quickly. A chess computer can consider huge numbers of possible moves and put a value on the ones with the best outcomes. The Jeopardy computer had massive databases of human knowledge available like Wikipedia and Google search – it looks up the answer to a question faster than a human mind can pull it out of memory.
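A miniature version of that brute-force approach – not chess, but the simple game of Nim (take 1 to 3 stones; whoever takes the last stone wins), searched exhaustively the same way a chess engine values continuations:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones: int):
    """Exhaustively evaluate every continuation; return (move, can_win)."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True                  # taking the last stone wins outright
        if take < stones and not best_move(stones - take)[1]:
            return take, True                  # leave the opponent a losing position
    return 1, False                            # every move loses against best play

print(best_move(7))   # (3, True): take 3 to leave 4, a losing position
print(best_move(4))   # (1, False): multiples of 4 are lost for the mover
```

There is no insight here – the program simply examines every possible future and picks the move with the best outcome, which is exactly the kind of computation that looks intelligent but isn’t.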

Much of what is thought of as AI today uses machine learning. Perhaps the easiest way to describe machine learning is with an example. Machine learning uses complex algorithms to analyze and rank data. Netflix uses machine learning to suggest shows that it thinks a given customer will like. Netflix knows what a viewer has already watched. Netflix also knows what millions of others who watch the same shows seem to like, and it looks at what those millions of others watched to make a recommendation. The algorithm is far from perfect because the data set of what any individual viewer has watched is small. I know in my case, I look at the shows recommended for my wife and see all sorts of shows that interest me, but which I am not offered. This highlights one of the problems of machine learning – it can easily be biased and draw wrong conclusions instead of right ones. Netflix’s suggestion algorithm can become a self-fulfilling prophecy unless a viewer makes the effort to look outside of the recommended shows – the more a viewer watches what is suggested, the more they are pigeonholed into a specific type of content.
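A toy sketch of that style of recommendation – this illustrates the general technique, not Netflix’s actual algorithm, and the viewers and shows are invented:

```python
# Hypothetical watch histories; similarity here is simple Jaccard overlap.
watch_history = {
    "alice": {"show_a", "show_b", "show_c"},
    "bob":   {"show_a", "show_b", "show_d"},
    "carol": {"show_b", "show_c", "show_e"},
    "dave":  {"show_f"},
}

def recommend(user, history):
    """Score unseen shows by how similar their watchers are to this user."""
    mine = history[user]
    scores = {}
    for other, theirs in history.items():
        if other == user:
            continue
        overlap = len(mine & theirs) / len(mine | theirs)  # Jaccard similarity
        if overlap == 0:
            continue                     # ignore viewers with no shared taste
        for show in theirs - mine:
            scores[show] = scores.get(show, 0.0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", watch_history))  # ['show_d', 'show_e']
```

Note the bias problem baked right in: alice will never be shown show_f, because nobody similar to her has watched it – the same self-fulfilling pigeonholing described above.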

Deep learning is a form of machine learning that can produce better results by passing data through multiple algorithms. For example, there are numerous forms of English spoken around the world. A customer service bot can begin each conversation in standard English, and then use layered algorithms to analyze the speaker’s dialect to switch to more closely match a given speaker.

I’m not implying that today’s techniques are not worthwhile. They are being used to create numerous automated applications that could not be done otherwise. However, almost every algorithm-based technique in use today will become instantly obsolete when a real AI is created.

I’ve read several experts who predict that we are only a few years away from an AI desert – meaning that we will have milked about all that can be had out of machine learning and deep learning. Developments with those techniques are not leading towards a breakthrough to real AI – machine learning is not part of the evolutionary path to AI. At least for today, both AI and 5G are largely non-existent, and the things passed off as these two technologies are pale versions of the real thing.

5G and Rural America

FCC Chairman Ajit Pai recently told the crowd at CES that 5G would be a huge benefit to rural America and would help to close the rural broadband divide. I have to imagine he’s saying this to keep rural legislators on board in supporting the FCC’s emphasis on promoting 5G. I’ve thought hard about the topic, and I have a hard time seeing how 5G will make much difference in rural America – particularly with broadband.

There is more than one use of 5G, and I’ve thought through each of them. Let me start with 5G cellular service. One major benefit of 5G cellular is that a cell site will be able to handle up to 100,000 simultaneous connections. 5G also promises slightly faster cellular data speeds. The specification calls for speeds up to 100 Mbps with the normal cellular frequencies – which happens to also have been the specification for 4G, although it was never realized.

I can’t picture a scenario where a rural cell site might need 100,000 simultaneous connections within a circle of a few miles. There aren’t many urban places that need that many connections today other than stadiums and other crowded locations where a lot of people want connectivity at the same time. I’ve heard farm sensors mentioned as a reason for needing 5G, but I don’t buy it. The normal crop sensor might dribble out tiny amounts of data a few times per day. These sensors cost close to $1,000 today, but even if they somehow get reduced to a cost of pennies, it’s hard to imagine a situation where any given rural cell site is going to need more capacity than is available with 4G.
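A back-of-envelope check bears this out – the sensor count, packet size, and reporting rate below are pure assumptions on my part, chosen to be generous:

```python
# Assumed numbers, deliberately generous:
sensors = 10_000            # crop sensors within reach of one rural cell site
bytes_per_reading = 200     # one small telemetry packet
readings_per_day = 4        # "a few times per day"

daily_mb = sensors * bytes_per_reading * readings_per_day / 1e6
print(f"{daily_mb:.0f} MB/day across the whole cell")
# Spread over 24 hours that's under 1 kbps of average load -
# trivial for 4G, no matter how cheap the sensors get.
```

Ten thousand sensors produce about 8 MB per day in this sketch – less traffic than a single smartphone generates in minutes.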

It’s great if rural cell sites get upgraded, but there can’t be many rural cell sites that are overloaded enough to demand 5G. There are also the economics to consider. It’s hard to imagine the cellular carriers being willing to invest in a rural cell site that might support only a few farmers – and it’s hard to think the farmers are willing to pay enough to justify their own cell site.

There has also been talk of lower frequencies benefiting rural America, and there is some validity to that. For example, T-Mobile’s 600 MHz frequency travels farther and penetrates obstacles better than higher frequencies. Using this frequency might extend good cellular data coverage as much as an extra mile and might support voice for several additional miles from a cell site. However, low frequencies don’t require 5G to operate. There is nothing stopping the carriers from introducing low frequencies with 4G (and in fact, that’s what they have done in the first generation of cellphones capable of using the lower frequencies). The cellular carriers are loudly claiming that their introduction of new frequencies is the same thing as 5G – it’s not.

5G can also be used to provide faster data using millimeter wave spectrum. The big carriers are all deploying 5G hot spots with millimeter wave technology in dense urban centers. This technology broadcasts super-fast broadband for up to 1,000 feet. The spectrum is also super-squirrely in that it doesn’t pass through anything, even a pane of glass. Try as I might, I can’t find a profitable application for this technology in suburbs, let alone rural places. If a farmer wants fast broadband in the barnyard, I suspect we’re only a few years away from people being able to buy a 5G/WiFi 6 hot spot that could satisfy this purpose without paying a monthly fee to a cellular company.

Finally, 5G can be used to provide gigabit wireless loops from a fiber network. This is the technology trialed by Verizon in a few cities like Sacramento. In that trial, speeds were about 300 Mbps, but there is no reason speeds can’t climb to a gigabit. For this technology to work, there has to be a transmitter on fiber within 1,000 feet of a customer. It seems unlikely to me that somebody spending the money to get fiber close to farms would use electronics for the last few hundred feet instead of a fiber drop. The electronics are always going to have problems and require truck rolls, and the electronics will likely have to be replaced at least once per decade. The small telcos and electric coops I know would scoff at the idea of adding another set of electronics into a rural fiber network.

I expect some of the 5G benefits to find uses in larger county seats – but those towns have the same characteristics as suburbia. It’s hard to think that rural America outside of county seats will ever need 5G.

I’m at a total loss as to why Chairman Pai and many politicians keep extolling the virtues of rural 5G. I have no doubt that rural cell sites will be updated to 5G over time, but the carriers will be in no hurry to do so. It’s hard to find situations in rural America that demand a 5G solution that can’t be done with 4G – and it’s even harder to justify the cost of 5G upgrades that benefit only a few customers. I can’t find a business case, or even an engineering case, for pushing 5G into rural America. I most definitely can’t foresee a 5G application that will solve the rural broadband divide.


Is 5G Radiation Safe?

There is a lot of public sentiment against placing small cell sites on residential streets. There is a particular fear of broadcasting the higher millimeter wave frequencies near homes since these frequencies have never been in widespread use before. In the public’s mind, higher frequencies mean a higher danger of health problems related to exposure to radiofrequency emissions. The public’s fears are further stoked when they hear that Switzerland and Belgium are limiting the deployment of millimeter wave radios until there is better proof that they are safe.

The FCC released a report and order on December 4 that is likely to add fuel to the fire. The agency rejected all claims that there is any public danger from radiofrequency emissions and affirmed the existing frequency exposure rules. The FCC said that none of the thousands of filings made in the docket provided any scientific evidence that millimeter wave and other 5G frequencies are dangerous.

The FCC is right in their assertion that there are no definitive scientific studies linking cellular frequencies to cancer or other health issues. However, the FCC misses the point that most of those asking for caution, including scientists, agree with that. The public has several specific fears about the new frequencies being used:

  • First is the overall range of new frequencies. In the recent past, the public was widely exposed to relatively low frequencies from radio and TV stations, to a fairly narrow range of cellular frequencies, and two bands of WiFi. The FCC is in the process of approving dozens of new bands of frequency that will be widely used where people live and work. The fear is not so much about any given frequency being dangerous, but rather a fear that being bombarded by a large range of frequencies will create unforeseen problems.
  • People are also concerned that cellular transmitters are moving from tall towers, which normally have been located away from housing, to small cell sites on poles located on residential streets. The fear is that these transmitters generate a lot of radiation close to the transmitter – which is true. The amount of radiation that strikes a given area decreases rapidly with distance from the transmitter. The anecdote that I’ve seen repeated on social media is of placing a cell site fifteen feet from the bedroom of a child. I have no idea if there is a real small cell site that is the genesis of this claim – but there could be. In dense urban neighborhoods, there are plenty of streets where telephone poles are within a few feet of homes. I admit that I would be leery about having a small cell site directly outside one of my windows.
  • The public worries when they know that there will always be devices that don’t meet the FCC guidelines. As an example, the Chicago Tribune tested eleven smartphones in August and found that a few of them were emitting radiation at twice the FCC maximum allowable limit. The public understands that vendors play loose with regulatory rules and that the FCC largely ignores such violations.
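The falloff with distance mentioned in the second bullet follows the inverse-square law. This is a free-space, point-source idealization that ignores antenna directionality, but it shows why distance matters so much:

```python
def relative_density(d1_ft: float, d2_ft: float) -> float:
    """Power density at distance d2 relative to distance d1 (inverse-square law)."""
    return (d1_ft / d2_ft) ** 2

# A spot 150 feet from a small cell sees 1% of the power density
# seen at 15 feet; even at 50 feet it is down to 9%:
print(f"{relative_density(15, 150):.2%}")  # 1.00%
print(f"{relative_density(15, 50):.2%}")   # 9.00%
```

That steep falloff is exactly why a transmitter fifteen feet from a bedroom window is a categorically different exposure than the same transmitter a block away.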

The public has no particular reason to trust this FCC. The FCC under Chairman Pai has sided with the large carriers on practically every issue in front of the Commission. This is not to say that the FCC didn’t give this docket the full consideration that should be given to all dockets – but the public perception is that this FCC would side with the cellular carriers even if there was a public health danger.

The FCC order is also not particularly helped by citing the buy-in from the Food and Drug Administration on the safety of radiation. That agency has licensed dozens of medicines that later proved to be harmful, so that agency also doesn’t garner a lot of public trust.

The FCC made a few changes with this order. They have mandated a new set of warning signs to be posted around transmitters. It’s doubtful that anybody outside of the industry will understand the meaning of the color-coded warnings. The FCC is also seeking comments on whether exposure standards should be changed for frequencies below 100 kHz and above 6 GHz. The agency is also going to exempt certain kinds of transmitters from FCC testing.

I’ve read extensively on both sides of the issue and it’s impossible to know the full story. For example, a majority of scientists in the field signed a petition to the United Nations warning against using higher frequencies without more testing. But it’s also easy to be persuaded by other scientists who say that higher frequencies don’t even penetrate the skin. I’ve not heard of any studies that look at exposing people to a huge range of different low-power frequencies.

This FCC is in a no-win position. The public properly perceives the agency as being pro-carrier, and anything the FCC says is not going to persuade those worried about radiation risks. I tend to side with the likelihood that the radiation is not a big danger, but I also have to wonder if there will be any impact after expanding tenfold the range of frequencies we’re exposed to. The fact is that we’re not likely to know until after we’ve all been exposed for a decade.

Killing 3G

I have bad news for anybody still clinging to their flip phones. All of the big cellular carriers have announced plans to end 3G cellular service, and each has a different timeline in mind:

  • Verizon previously said they would stop supporting 3G at the end of 2019, but now says it will end service at the end of 2020.
  • AT&T has announced the end of 3G to be coming in early 2022.
  • Sprint and T-Mobile have not expressed a specific date but are both expected to stop 3G service sometime in 2020 or 2021.

The amount of usage on 3G networks is still significant. GSMA reported that at the end of 2018 as many as 17% of US cellular customers still made 3G connections, which accounted for as much as 19% of all cellular connections.

The primary reason cited for ending 3G is that the technology is far less efficient than 4G. A 3G connection to a cell site chews up the same amount of frequency resources as a 4G connection yet delivers far less data to customers. The carriers are also anxious to free up mid-range spectrum for upcoming 5G deployment.

Opensignal measures actual speed performance for millions of cellular connections and recently reported the following statistics for the average 3G and 4G download speeds as of July 2019:

Carrier      4G (2019)    3G (2019)
AT&T         22.5 Mbps    3.3 Mbps
Sprint       19.2 Mbps    1.3 Mbps
T-Mobile     23.6 Mbps    4.2 Mbps
Verizon      22.9 Mbps    0.9 Mbps
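Using those Opensignal averages, the efficiency gap per carrier works out to roughly:

```python
# Opensignal July 2019 averages from above, in Mbps: (4G, 3G)
speeds = {"AT&T": (22.5, 3.3), "Sprint": (19.2, 1.3),
          "T-Mobile": (23.6, 4.2), "Verizon": (22.9, 0.9)}

for carrier, (g4, g3) in speeds.items():
    print(f"{carrier}: 4G delivers {g4 / g3:.0f}x the speed of 3G")
```

The gap ranges from about 6x at T-Mobile to roughly 25x at Verizon – a concrete picture of why carriers want those 3G frequencies back for 4G and 5G.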

The carriers have been hesitating on ending 3G because there are still significant numbers of rural cell sites that still don’t offer 4G. The cellular carriers were counting on funding from the FCC’s Mobility Fund Phase II to upgrade rural cell sites. However, that funding program got derailed and delayed when the FCC found there were massive errors in the data provided for distributing that fund. The big carriers were accused by many of rigging the data in a way to give more funding to themselves instead of to smaller rural cellular providers.

The FCC staff conducted significant testing of the reported speed and coverage data and released a report of their findings in December 2019. The testing showed that the carriers have significantly overreported 4G coverage and speeds across the country. This report is worth reading for anybody that needs to be convinced of the garbage data that has been used for the creation of FCC broadband maps. I wish the FCC Staff would put the same effort into investigating landline broadband data provided to the FCC. The FCC Staff recommended that the agency should release a formal Enforcement Advisory including ‘a detailing of the penalties associated with carrier filings that violate federal law’.

The carriers are also hesitant to end 3G since a lot of customers still use the technology. Opensignal says there are several reasons for the continued use of 3G. First, 12.7% of 3G users live in rural areas where 3G is the only cellular technology available. Opensignal says that 4.1% of 3G users still own old flip phones that are not capable of receiving 4G. The biggest category of 3G users is customers who own a 4G-capable phone but still subscribe to a 3G data plan. AT&T is the largest provider of such plans and has not forced customers to upgrade to 4G plans.

The carriers need to upgrade rural cell sites to 4G before they can be allowed to cut 3G dead. In doing so they need to migrate customers to 4G data plans and also notify customers who still use 3G-only flip phones that it’s finally time to upgrade.

One aspect of the 3G issue that nobody is talking about is that AT&T says it is using fixed wireless connections to meet its CAF II buildout requirements. Since the CAF II areas include some of the most remote landline customers, it stands to reason that these are the same areas that are likely to still be served with 3G cell towers. AT&T can’t deliver 10/1 Mbps or faster speeds using 3G technology. This makes me wonder what AT&T has been telling the FCC in terms of meeting their CAF II build-out requirements.

US Has Poor Cellular Video

Opensignal recently published a report that compares the quality of cellular video around the world. Video has become a key part of the cellular experience as people use cellphones for entertainment and as social media and advertising migrate to video.

The use of cellular video is exploding. Netflix reports that 25% of its total streaming worldwide is sent to mobile devices. The newly launched Disney+ app got over 3 million downloads in just its first 24 hours. The Internet Advertising Bureau says that 62% of video advertisements are now seen on cellphones. Video-heavy social media sites like Instagram and TikTok are growing rapidly.

The pressure on cellular networks to deliver high-quality video is growing. Ericsson recently estimated that video will grow to almost 75% of all cellular traffic by 2024, up from 60% today. Look back five years, and video was a relatively small component of cellular traffic. To some extent, US carriers have contributed to the issue. T-Mobile includes Netflix in some of its plans; Sprint includes Hulu or Amazon Prime; Verizon just started bundling Disney+ with cellular plans; and AT&T offers premium movie services like HBO or Starz with premium plans.

US video quality was ranked 68th out of 100 countries, the equivalent of an F grade. That places our wireless video experience far behind other industrialized countries and puts the US in the same category as many countries in Africa and South and Central America. One of the most interesting statistics about US video watching is that 38% of users watch video at home over a cellular connection rather than their WiFi connection. That also says a lot about the poor quality of broadband connections in many US homes.

Interestingly, the ranking of video quality is not directly correlated with cellular data speeds. For example, South Korea has the fastest cellular networks but ranked 21st in video quality. Canada has the third-fastest cellular speeds and was ranked 22nd in video quality. The video quality rankings are instead based upon measurable metrics like picture quality, video loading times, and stall rates. These factors together define the quality of the video experience.

One of the reasons that US video quality was rated so low is that the US cellular carriers compress video heavily to save on network bandwidth. The Opensignal report speculates that the primary culprit for poor US video quality is the lack of cellular spectrum. US cellular carriers are now starting to deploy new spectrum bands, and more auctions for mid-range spectrum are coming next year. But it takes 3-4 years to fully integrate new spectrum, since the carriers must upgrade cell sites, and it takes even longer for handsets that use the new spectrum to widely penetrate the market.

Only six countries got an excellent rating for video quality – Norway, Czech Republic, Austria, Denmark, Hungary, and the Netherlands. Meanwhile, the US is bracketed on the list between Kyrgyzstan and Kazakhstan.

Interestingly, the early versions of 5G won’t necessarily improve video quality. The best example is South Korea, which already has millions of customers using what are touted as 5G phones, yet the country is still ranked 21st in video quality. Cellular carriers treat video traffic differently than other data, and it’s often the video delivery platform that contributes to video problems.

The major fixes to the US cellular networks are at least a few years away for most of the country. The introduction of more small cells, the implementation of more spectrum, and the eventual introduction of the features from the 5G specifications will all contribute to a better US cellular video experience. However, with US cellular data volumes doubling every two years, the chances are that the US video rating will drop further before it improves significantly. The network engineers at the US cellular companies face an almost unsolvable problem of maintaining network quality while dealing with unprecedented growth.
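The arithmetic behind that pessimism is straightforward. As a rough sketch (assuming smooth exponential growth and using illustrative numbers, not carrier data), the headroom bought by any capacity upgrade can be computed directly from the traffic doubling period:

```python
import math

def years_of_headroom(capacity_multiple: float, doubling_period_years: float) -> float:
    """Years until traffic re-fills a link after a capacity upgrade,
    assuming traffic doubles every `doubling_period_years`."""
    # Traffic grows as 2**(t / d); solve 2**(t / d) = capacity_multiple for t.
    return doubling_period_years * math.log2(capacity_multiple)

# A 100 Gb -> 400 Gb upgrade (4x) with traffic doubling every 3 years:
print(years_of_headroom(4, 3))  # -> 6.0 years
# The same 4x upgrade with traffic doubling every 2 years:
print(years_of_headroom(4, 2))  # -> 4.0 years
```

The sketch shows why even a 4x capacity jump buys only a handful of years when growth is exponential.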

Modems versus Routers

I have to admit that today’s blog is the result of one of my minor pet peeves – I find myself wincing a bit whenever I hear somebody interchange the words modem and router. That’s easy enough to do since today there are a lot of devices in the world that include both a modem and a router. But for somebody who’s been around since the birth of broadband, there is a big distinction. Today’s blog is also a bit nostalgic as I recalled the many kinds of broadband I’ve used during my life.

Modems. A modem is a device that connects a user to an ISP. Before there were ISPs, a modem made a data connection between two points. Modems are specific to the technology being used to make the connection.

In the picture accompanying this blog is an acoustic coupler, which is a modem that makes a data connection using the acoustic signals from an analog telephone. I used a 300 baud modem (which communicated at 300 bps – bits per second) around 1980 at Southwestern Bell when programming in BASIC. The modem allowed me to connect my telephone to a company mainframe modem and ‘type’ directly into programs stored on the mainframe.

Modems grew faster over time, and by the 1990s we could communicate with a dial-up ISP. The first such modem I recall using communicated at 28.8 kbps (28,800 bits per second). The technology was eventually upgraded to 56 kbps.
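It’s easy to forget how big the jumps between those generations were. A quick sketch (the 1 MB file size is just an illustrative example) shows the idealized time to move a file at each rate:

```python
def transfer_seconds(size_bytes: int, bits_per_second: float) -> float:
    """Idealized transfer time, ignoring protocol overhead and line noise."""
    return size_bytes * 8 / bits_per_second

ONE_MB = 1_000_000  # illustrative 1 MB file

for label, bps in [("300 bps acoustic coupler", 300),
                   ("28.8 kbps dial-up", 28_800),
                   ("56 kbps dial-up", 56_000),
                   ("1 Mbps DSL", 1_000_000)]:
    print(f"{label}: {transfer_seconds(ONE_MB, bps):,.0f} seconds")
```

At 300 bps that 1 MB file takes the better part of a workday; at 1 Mbps DSL it takes 8 seconds.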

Around 2000, I upgraded to a 1 Mbps DSL modem from Verizon. This was a device that sat next to an existing telephone jack. If I recall, this first modem used ADSL technology. The type of DSL matters, because a customer upgrading to a different variety of DSL, such as VDSL2, has to swap to the appropriate modem.

In 2006 I was lucky enough to live in a neighborhood that was getting Verizon FiOS on fiber and I upgraded to 30 Mbps service. The modem for fiber is called an ONT (Optical Network Terminal) and was attached to the outside of my house. Verizon at the time was using BPON technology. A customer would have to swap ONTs to upgrade to newer fiber technologies like GPON.

Today I use broadband from Charter, delivered over a hybrid fiber-coaxial network. Cable modems use the DOCSIS standards developed by CableLabs. I have a 135 Mbps connection that is delivered using a DOCSIS 3.0 modem. If I want to upgrade to faster broadband, I’d have to swap to a DOCSIS 3.1 modem – the newest technology on the Charter network.

Routers. A router allows a broadband connection to be split to connect to multiple devices. Modern routers also contain other functions such as the ability to create a firewall or the ability to create a VPN connection.

The most common kind of router in homes is a WiFi router that can connect multiple devices to a single broadband connection. My first WiFi router came with my Verizon FiOS service. It was a single WiFi device intended to serve the whole home. Unfortunately, my house at the time was built in the 1940s and had plaster walls with metal lath, which created a complete barrier to WiFi signals. Soon after I figured out the WiFi’s limitations, I bought my first Ethernet router and used it to string broadband connections over Cat 5 cables to other parts of the house. It’s probably good that I was single at the time because I had wires running all over the house!

Today it’s common for an ISP to combine the modem (which talks to the ISP network) and the router (which talks to the devices in the home) into a single device. I’ve always advised clients not to combine the modem and the WiFi router, because if you want to upgrade only one of those two functions you have to replace the whole device. With separate devices, an ISP can upgrade just one function. That’s going to become an issue soon for many ISPs when customers start asking the ISPs to provide WiFi 6 routers.

Some ISPs go beyond a simple modem and router. For example, most Comcast broadband connections to single-family homes provide a WiFi router for the home plus a second WiFi router that broadcasts to nearby customers outside the home. These dual routers allow Comcast to claim millions of public WiFi hotspots. Many of my clients are now installing networked router systems for customers, where multiple routers share the same network. These systems can provide strong WiFi throughout a home, with the advantage that the same password works at every router.