The Impending Cellular Data Crisis

There is one industry statistic that isn’t getting a lot of press – the fact that cellular data usage is more than doubling every two years. You don’t have to plot that growth rate very many years into the future to realize that existing cellular networks will be inadequate to handle the demand in just a few years. What’s even worse for the cellular industry is that this growth rate is a nationwide average. Many of my clients tell me there isn’t nearly that much growth at rural cellular towers – meaning the growth at some urban and suburban towers is likely even faster.
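For readers who want to see the math behind that worry, here is a quick back-of-the-envelope sketch in Python. The starting point of 1.0 (today’s traffic) and the ten-year horizon are my own illustrative assumptions, not industry figures – the only input taken from the statistic above is the two-year doubling period.

```python
import math

# Project cellular data demand that doubles every two years.
# "1.0" represents today's traffic; the ten-year horizon is illustrative.
DOUBLING_PERIOD_YEARS = 2

def demand_multiple(years: float) -> float:
    """Traffic as a multiple of today's traffic after `years` of growth."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for year in range(0, 11, 2):
    print(f"Year {year:2d}: {demand_multiple(year):5.1f}x today's traffic")

# How long before a tower engineered with 10x headroom today is full again?
years_to_10x = DOUBLING_PERIOD_YEARS * math.log2(10)
print(f"A 10x capacity upgrade is consumed in about {years_to_10x:.1f} years")
```

Run it and the problem is obvious – traffic reaches 32 times today’s level within a decade, and even a ten-fold capacity upgrade buys less than seven years of breathing room.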

Much of this growth is a self-inflicted wound by the cellular industry. Carriers have raised monthly data allowances and are often bundling free video with cellular service, thus driving up usage. The public is responding to these changes by using the extra bandwidth made available to them.

There are a few obvious choke points that will be exposed by this kind of growth. Current cellphone technology limits the number of simultaneous connections that can be made from any given tower. As customers watch more video they eat up slots on the cell tower that otherwise could have been used to process numerous short calls and text messages. The other big chokepoint is going to be the broadband backhaul feeding each cell site. When usage grows this fast it’s going to get increasingly expensive to buy leased backbone bandwidth – which explains why Verizon and AT&T are furiously building fiber to cell sites to avoid huge increases in backhaul costs.

5G will fix some, but not all, of these issues. The growth is so explosive that cellular companies need to use every technique possible to make cell towers more efficient. Probably the best fix is to use more spectrum. Adding a new band of spectrum to a cell site immediately adds capacity. However, this can’t happen overnight. New spectrum is only useful if customers can use it, and it takes a number of years to modify cell sites and cellphones to work on a new band. The need to meet growing demand is the primary reason that the CTIA recently told the FCC the industry needs an eye-popping 400 MHz of new mid-range spectrum for cellular use. The industry painted that as being needed for 5G, but it’s needed now for 4G LTE.

Another fix for cell sites is to use existing frequencies more efficiently. The most promising way to do this is with MIMO antenna arrays – a technology that deploys multiple antennas in cellphones to combine multiple bands of spectrum into a larger data pipe. MIMO technology can make it easier to respond to a request from a large bandwidth user – but it doesn’t relieve the overall pressure on a cell tower. If anything, it might do the exact opposite and let cell towers prioritize those who want to watch video over smaller users, who might then be blocked from making voice calls or sending text messages. MIMO is also not an immediate fix and needs to work through the same cycle of getting the technology into cellphones.

The last strategy is what the industry calls densification, which is adding more cell sites. This is the driving force behind placing small cell sites on poles in areas with big cellular demand. However, densification might create as many problems as it solves. Most of the current frequencies used for cellular service travel a decent distance, and placing cell sites too close together will create a lot of interference and noise between neighboring towers. While adding new cell sites adds local capacity, it also decreases the efficiency of all nearby cell sites using traditional spectrum – the overall improvement from densification is going to be a lot less than might be expected. The worst thing about this is that interference is hard to predict and is very much a local issue. This is the primary reason that the cellular companies are interested in millimeter wave spectrum for cellular – the spectrum travels a short distance and won’t interfere as much between cell sites placed closely together.

5G will fix some of these issues. The ability of 5G to do frequency slicing means that a cell site can provide just enough bandwidth for every user – a tiny slice of spectrum for a text message or IoT signal and a big pipe for a video stream. 5G will vastly expand the number of simultaneous users that can share a single cell site.

However, 5G doesn’t provide any additional advantages over 4G in terms of the total amount of backhaul bandwidth needed to feed a cell site. And that means that a 5G cell site will get equally overwhelmed if people demand more bandwidth than a cell site has to offer.

The cellular industry has a lot of problems to solve over a relatively short period of time. I expect that in the middle of the much-touted 5G roll-out we are going to start seeing some spectacular failures in the cellular networks at peak times. I feel sympathy for cellular engineers because it’s nearly impossible to have a network ready to handle data usage that doubles every two years. Even if engineers figure out strategies to handle five or ten times more usage, in only a few years usage will catch up to those fixes.

I’ve never believed that cellular broadband can be a substitute for landline broadband. Every time somebody at the FCC or a politician declares that the future is wireless I’ve always rolled my eyes, because anybody that understands networks and the physics of spectrum can easily demonstrate that there are major limitations on the total bandwidth capacity at a given cell site, along with a limit on how densely cell sites can be packed in an area. The cellular networks are only carrying 5% of the total broadband in the country and it’s ludicrous to think that they could be expanded to carry most of it.

5G For Rural America?

FCC Chairman Ajit Pai recently addressed the NTCA-The Rural Broadband Association membership and said that he saw a bright future for 5G in rural America. He sees 5G as a fixed-wireless deployment that fits in well with the fiber deployment already made by NTCA members.

The members of NTCA are rural telcos and many of these companies have upgraded their networks to fiber-to-the-home. Some of these telcos tackled building fiber a decade or more ago and many more are building fiber today using money from the ACAM program – part of the Universal Service Fund.

Chairman Pai was talking to companies that largely have been able to deploy fiber, and since Pai is basically the national spokesman for 5G it makes sense that he would try to make a connection between 5G and rural fiber. However, I’ve thought through every business model for marrying 5G and rural fiber and none of them make sense to me.

Consider the use of millimeter wave spectrum in rural America. I can’t picture a viable business case for deploying millimeter wave spectrum where a telco has already deployed fiber drops to every home. No telco would spend money to create wireless drops where they have already paid for fiber drops. One of the biggest benefits from building fiber is that it simplifies operations for a telco – mixing two technologies across the same geographic footprint would add unneeded operational complications that nobody would tackle on purpose.

The other business plan I’ve heard suggested is to sell wholesale 5G connections to other carriers as a new source of income. I also can’t imagine that happening. Rural telcos are going to fight hard to keep out any competitor that wants to use 5G to compete for their existing broadband customers. I can’t imagine a rural telco agreeing to provide fiber connections to 5G transmitters that would sit outside homes and compete against them – a telco that lets in a 5G competitor would be committing economic suicide. Rural business plans are precarious by definition, and most rural markets don’t generate enough profit to justify two competitors.

What about using 5G in a competitive venture where a rural telco is building fiber outside of their territory? There may come a day when wireless loops have a lower lifecycle cost than fiber loops. But for now, it’s hard to think that a wireless 5G connection with electronics that need to be replaced at least once a decade can really compete over the long-haul with a fiber drop that might last 50 or 75 years. If that math flips we’ll all be building wireless drops – but that’s not going to happen soon. It’s probably going to take tens of millions of installations of millimeter wave drops until telcos trust 5G as a substitute for fiber.

Chairman Pai also mentioned mid-range spectrum in his speech, specifically the upcoming auction for 3.5 GHz spectrum. How might mid-range spectrum create a rural 5G play that works with existing fiber? It might be a moot question since few rural telcos are going to have access to licensed spectrum.

But assuming that telcos could find mid-range licensed spectrum, how would they benefit from their existing fiber? As with millimeter wave spectrum, a telco is not going to deploy this technology to cover the same areas where they already have fiber connections to homes. The future use of mid-range spectrum will be the same as it is today – to provide wireless broadband to customers who don’t live close to fiber. The radios will be placed on towers, the taller the better, and the towers will then make connections to homes equipped with dishes that can communicate with the tower.

Many of the telcos in the NTCA are already deploying this fixed wireless technology today outside of their fiber footprint. The technology benefits from having towers fed by fiber, but this is rarely the same fiber that a telco is using to serve customers. In most cases this business plan requires extending fiber outside of the existing service footprint – yet Chairman Pai said specifically that he saw an advantage for 5G from existing fiber.

Further, it’s a stretch to label mid-range spectrum point-to-multipoint radio systems as 5G. From what numerous engineers have told me, 5G is not going to make big improvements over the way that fixed wireless operates today. 5G will add flexibility for the operator to fine-tune the wireless connection to any given customer, but the 5G technology won’t inherently increase the speed of the wireless broadband connection.

I just can’t find any business plan that is going to deliver 5G in rural America that takes advantage of the fiber that the small telcos have already built. I would love to hear from readers who might see a possibility that I have missed. I’ve thought about this a lot and I struggle to find the benefits for 5G in rural markets that Chairman Pai has in mind. 5G clearly needs a fiber-rich environment – but companies who have already built rural fiber-to-the-home are not going to embrace a second overlay technology or openly allow competitors onto their networks.

Google Fiber Leaving Louisville

Most readers have probably heard by now that Google Fiber is leaving Louisville because of failures with their fiber network. They are giving customers two months of free service and sending them back to the incumbent ISPs in the city. The company used a construction technique called micro-trenching, where they cut a tiny slit in the road, an inch wide and a few inches deep, to carry the fiber. Only a year after construction the fiber is popping out of the micro-trenches all over the city.

Everybody I’ve talked to is guessing that it’s a simple case of ice heaving. While a micro-trench is sealed, it’s likely that small amounts of moisture seep into the sealed micro-trench and freeze when it gets cold. The first freeze would create tiny cracks, and with each subsequent freeze the cracks would get a little larger until the trench finally fills up with water, fully freezes and ejects the fill material. The only way to stop this would be to find a permanent seal that never lets in moisture. That sounds like a tall task in a city like Louisville that might freeze and thaw practically every night during the winter.

Nobody other than AT&T or Charter can be happy about this. The reason that Google Fiber elected to use micro-trenching is that both big ISPs fought tooth and nail to block Google Fiber from putting fiber on the utility poles in the city. The AT&T suit was resolved in Google’s favor, while the Charter suit is still in court. Perhaps Google Fiber should have just waited out the lawsuits – but the business pressure was there to get something done. Unfortunately, the big ISPs are being rewarded for their intransigence.

One obvious lesson learned is not to launch a new network using an untried and untested construction technique. In this case, the micro-trenches didn’t just fail, they failed spectacularly, in the worst way imaginable. Google Fiber says the only fix for the problem would be to build the network again from scratch, which makes no financial sense.

Certainly, the whole industry is going to now be extremely leery about micro-trenching, but there is a larger lesson to be learned from this. For example, I’ve heard from several small ISPs who are ready to leap into the 5G game and build networks using millimeter wave radios installed on poles. This is every bit as new and untested a technology as micro-trenching was. I’m not predicting that anybody pursuing that business plan will fail – but I can assuredly promise that they will run into unanticipated problems.

Over my career, I can’t think of a single example where an ISP that took a chance on a cutting-edge technology didn’t have big problems – and some of those problems were just as catastrophic as what Google Fiber just ran into. For example, I can remember half a dozen companies that tried to deploy broadband networks using the LMDS spectrum. I remember one case where the radios literally never worked and the venture lost their $2 million investment. I remember several others where the radios had glitches that caused major customer outages and were largely a market disaster.

One thing that I’ve seen over and over is that telecom vendors take shortcuts. When they introduce a new technology they are under extreme pressure to get it to market and drive new revenues. Ideally, a vendor would hold small field trials of new technology for a few years to work out the bugs. But if a vendor finds an ISP willing to take a chance on a beta technology, they are happy to let the customers of that ISP be the real guinea pigs for the technology, and for the ISP to take the hit for the ensuing problems.

I can cite similar stories for the first generation of other technologies including the first generation of DSL, WiFi mesh networks, PON fiber-to-the-home and IPTV. The companies that were the first pioneers deploying these technologies had costly and sometimes deadly problems. So perhaps the lesson learned is that pioneers pay a price. I’m sure that this failure of micro-trenching will result in changing or abandoning the technique. Perhaps we’ll learn to not use micro-trenches in certain climates. Or perhaps they’ll find a way to seal the micro-trenches against humidity. But none of those future solutions will make up for Google Fiber’s spectacular failure.

The real victims of this situation are the households in Louisville that had switched to Google Fiber – and everybody else in the city. Because of Google Fiber’s lower prices, both Charter and AT&T lowered prices everywhere in the city. You can bet it’s not going to take long to get the market back to full prices. Any customers crawling back to the incumbents from Google Fiber can probably expect to pay full price immediately – there is no real incentive to give them a low-price deal. As a whole, every household in the city is going to be spending $10 or $20 more per month for broadband – which is a significant penalty on the local economy.

Breakthroughs in Light Research

It’s almost too hard to believe, but I’ve heard network engineers suggest that we may soon exhaust the bandwidth capacity of our busiest backbone fiber routes, particularly in the northeast. At the rate that our use of data is growing, we will outgrow the total capacity of existing fibers unless we develop faster lasers or build new fiber. The natural inclination is to build more fiber – but at the rate our data is growing, we would consume the capacity of new fibers almost as quickly as they are built. Lately scientists have been working on the problem, and there have been a lot of breakthroughs in working with light in ways that can enhance laser communications.

Twisted Light. Dr. Haoran and a team at the RMIT School of Science in Melbourne, Australia have developed a nanophotonic device that lets them read twisted light. Scientists have found ways to bend light into spirals in a state known as orbital angular momentum (OAM). The twisted nature of the light beams presents the opportunity to encode significantly more data than straight-path laser beams due to the convoluted configuration of the light beam. However, until now nobody has been able to read more than a tiny segment of the twisted light.

The team has developed a nano-detector that separates the twisted light states into a continuous order, enabling them to both code and decode using a wider range of the OAM light beam. The detector is made of readily available materials, which should make it inexpensive and scalable for industrial production. The team at RMIT believes that with refinement the detector could bring about more than a 100-times increase in the amount of data that could be carried on one fiber. The nature of the detector should also enable it to receive quantum data from the quickly emerging field of quantum computing.

Laser Bursts Generate Electricity. A team led by Ignacio Franco at the University of Rochester, along with a team from the University of Hong Kong, has discovered how to use lasers to generate electricity directly inside chips. They are using a glass thread that is a thousand times thinner than a human hair. When they hit this thread with a laser burst lasting one millionth of one billionth of a second, they’ve found that for a brief moment the glass acts like a metal and generates an electric current.

One of the biggest limitations on silicon computer chips is moving signals into the chip quickly. With this technique an electrical pulse can be created directly inside the chip where and when it’s needed, meaning an improvement of several orders of magnitude in the speed of getting signals to chip components. The direction and magnitude of the current can be controlled by varying the shape of the laser beam – by changing its phase. This could also lead to the development of tiny chips operating just above the size of simple molecules.

Infrared Computer Chips. Teams of scientists at the University of Regensburg in Germany and the University of Michigan have discovered how to use infrared lasers to shift electrons between two states of angular momentum on a thin sheet of semiconductor material. Flipping between the two electron states creates the classic 1 and 0 needed for computing, at the electron level. Ordinary electronics operate in the gigahertz range, meaning a device can interact with electrons only about a billion times per second. Being able to directly change the state of an electron could speed this up as much as a million times.

The scientists think it is possible to build a ‘lightwave’ computer that would have a million-times faster clock than today’s fastest chips. The next challenge is to develop the train of laser pulses that can produce the desired flips between the two states as needed. This process could also unleash quantum computing. The biggest current drawback of quantum computing is that the qubits – the output of a quantum computation – don’t last very long. A much faster time clock could easily work inside of the quantum time frames.

Breaking the Normal Rules of Light. Scientists at the National Physical Laboratory in England have developed a technique that changes the fundamental nature of light. Light generally moves through the world as a wave. The scientists created a device they call an optical ring resonator. It bends light into continuous rings, and as the light in the rings interacts it creates unique patterns that differ significantly from normal light. The light loses its vertical polarization (the wave peak) and begins moving in ellipses. The scientists hope that by manipulating light this way they will be able to develop new designs for atomic clocks and quantum computers.

Facebook Takes a Stab at Wireless Broadband

Facebook has been exploring two technologies in its labs that they hope will make broadband more accessible for the many communities around the world that have poor or zero broadband. The technology I’m discussing today is Terragraph which uses an outdoor 60 GHz network to deliver broadband. The other is Project ARIES which is an attempt to beef up the throughput on low-bandwidth cellular networks.

The Terragraph technology was originally intended as a way to bring street-level WiFi to high-density urban downtowns. Facebook looked around the globe and saw many large cities that lack basic broadband infrastructure – it’s nearly impossible to fund fiber in third-world urban centers. The Terragraph technology uses 60 GHz spectrum and the 802.11ay standard – this technology combination was originally called WiGig.

Using 60 GHz and 802.11ay together is an interesting choice for an outdoor application. On a broadcast basis (hotspot) this frequency only carries between 35 and 100 feet, depending upon humidity and other factors. The original intended use of WiGig was as an indoor gigabit wireless network for offices. The 60 GHz spectrum won’t pass through anything, so it was intended to be a wireless gigabit link within a single room. 60 GHz faces problems as an outdoor technology since the frequency is absorbed by both oxygen and water vapor. But numerous countries have released 60 GHz as unlicensed spectrum, making it available without costly spectrum licenses, and the channels are large enough to still deliver bandwidth even with the physical limitations.

It turns out that a focused beam of 60 GHz spectrum will carry up to about 250 meters when used for backhaul. The urban Terragraph network planned to mount 60 GHz units on downtown poles and buildings. These units would act as hotspots and also form a backhaul mesh network between units. This is similar to the WiFi networks we saw being tried in a few US cities almost twenty years ago. The biggest downside to the urban idea is the lack of cheap handsets that can use this frequency.

Facebook took a right turn on the urban idea and completed a trial of the technology deployed in a different network design. Last May Facebook worked with Deutsche Telekom to deploy a fixed Terragraph network in Mikebuda, Hungary. This is a small town of about 150 homes covering 0.4 square kilometers – about 100 acres. This is drastically different from a dense urban deployment, with a housing density far lower than US suburbs – it’s similar to many small rural towns in the US, with large lots and empty spaces between homes. The only existing broadband in the town was DSL, with about 100 customers.

In a fixed mesh network every unit deployed is part of the mesh – each unit can deliver bandwidth into a home as well as bounce the signal along to the next home. In Mikebuda the two companies decided that the ideal network would serve 50 homes (I’m not sure why they couldn’t serve all 100 of the DSL customers). The network delivers about 650 Mbps to each home, although each home is limited to about 350 Mbps due to the limitations of the 802.11ac WiFi routers inside the home. This is a big improvement over the 50 Mbps DSL being replaced.

The wireless mesh network is quick to install – the network was up and running to homes within two weeks. The mesh network configures itself and can instantly reroute and heal around a bad mesh unit. The biggest local drawback is the need for pure line-of-sight, since 60 GHz can’t tolerate any foliage or other impediments, and tree trimming was needed to make this work.

Facebook envisions this fixed deployment as a way to bring bandwidth to the many smaller towns that surround most cities. However, they admit that in the third world the limitation will be backhaul bandwidth, since the third world doesn’t typically have much middle-mile fiber outside of cities – figuring out how to get bandwidth to the small towns is a bigger challenge than serving the homes within a town. Even in the US, the cost of bandwidth to reach a small town is often the limiting factor in affordably building a broadband solution. In the US this will be a direct competitor to 5G for serving small towns. The Terragraph technology has the advantage of using unlicensed spectrum, but ISPs are going to worry about the squirrelly nature of 60 GHz spectrum.

Assuming that Facebook can find a way to standardize the equipment and get it into mass production, this is another interesting wireless technology to consider. Current point-to-multipoint wireless networks don’t work as well in small towns as they do in rural areas, and this might provide a different way for a WISP to serve a small town. In the third world, however, the limiting factor for many of the candidate markets will be getting backhaul bandwidth to the towns.

The Physics of Millimeter Wave Spectrum

Many of the planned uses for 5G rely upon millimeter wave spectrum, and like every wireless technology the characteristics of the spectrum define both the benefits and limitations of the technology. Today I’m going to take a shot at explaining the physical characteristics of millimeter wave spectrum without using engineering jargon.

Millimeter wave spectrum falls in the range of 30 GHz to 300 GHz, although so far there has been no discussion in the industry of using anything higher than 100 GHz. The term millimeter wave describes the shortness of the radio waves, which are only a few millimeters or less in length. The 5G industry is also using spectrum with slightly longer waves, such as 24 GHz and 28 GHz – but these frequencies share a lot of the same operating characteristics.

There are a few reasons why millimeter wave spectrum is attractive for transmitting data. Millimeter wave spectrum has the capability of carrying a lot of data, which is what prompts discussion of using it to deliver gigabit wireless service. If you think of radio in terms of waves, then the higher the frequency the greater the number of waves that are emitted in a given period of time. For example, if each wave carries one bit of data, then a 30 GHz transmission can carry more bits in one second than a 10 GHz transmission and a lot more bits than a 30 MHz transmission. It doesn’t work exactly like that, but it’s a decent analogy.
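For those who want one step more rigor than the wave analogy, a channel’s data capacity is bounded by the Shannon limit, C = B × log2(1 + SNR) – capacity scales with the width of the channel, and the practical appeal of millimeter wave bands is that far wider channels are available there than at traditional cellular frequencies. Here is a minimal sketch of that math; the channel widths are illustrative examples rather than any specific license, and the 20 dB signal-to-noise ratio is an assumption I picked just for the comparison.

```python
import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), returned in Mbps for B in MHz."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

# Illustrative channel widths, all evaluated at an assumed 20 dB SNR.
channels = [
    ("20 MHz traditional cellular channel", 20),
    ("100 MHz mid-band channel", 100),
    ("800 MHz of millimeter wave spectrum", 800),
]
for label, bandwidth_mhz in channels:
    ceiling = shannon_capacity_mbps(bandwidth_mhz, snr_db=20)
    print(f"{label}: theoretical ceiling of roughly {ceiling:,.0f} Mbps")
```

The point of the comparison is simply that the gigabit claims for millimeter wave come from channel width, not from any magic in the higher frequency itself.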

This wave analogy also illustrates the biggest limitation of millimeter wave spectrum – the much shorter effective distances for using this spectrum. All radio waves naturally spread from a transmitter, and thinking of waves in a swimming pool is a good analogy. The further across the pool a wave travels, the more the strength of the wave is dispersed. When you send a big wave across a swimming pool it’s still pretty big at the other end, but when you send a small wave it’s often impossible to even notice it at the other side of the pool. The small millimeter-length waves die off faster. With a higher frequency the waves are also closer together. Using the pool analogy, that means that when waves are packed tightly together they can more easily bump into each other and become hard to distinguish as individual waves by the time they reach the other side of the pool. This is part of the reason why shorter millimeter waves don’t carry as far as other spectrum.
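The spreading described by the pool analogy can be put into numbers with the standard free-space path loss formula, which grows with both distance and frequency. Below is a small sketch of that formula – the distances and frequencies are chosen only for illustration, and real-world losses from oxygen absorption, rain and obstacles come on top of these figures.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_ghz: float) -> float:
    """Standard free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    d_km = distance_m / 1000
    f_mhz = freq_ghz * 1000
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

# How much has the signal weakened after roughly 300 feet (about 90 meters)?
# The frequencies are illustrative examples of cellular and millimeter bands.
for freq_ghz in (0.6, 2.5, 30, 60):
    loss = free_space_path_loss_db(distance_m=90, freq_ghz=freq_ghz)
    print(f"{freq_ghz:>4} GHz after 90 m: about {loss:.0f} dB of spreading loss")
```

A 30 GHz signal arrives roughly 34 dB weaker than a 600 MHz signal over the same 300 feet – more than a 2,000-fold difference in received power – which is why the higher bands need small cells, tight beams and clear paths.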

It would be possible to send millimeter waves further by using more power – but the FCC limits the allowed power for all radio frequencies to reduce interference and for safety reasons. High-power radio waves can be dangerous (think of the radio waves in your microwave oven). The FCC low power limitation greatly reduces the carrying distance of this short spectrum.

The delivery distance for millimeter waves can also be impacted by a number of local environmental conditions. In general, shorter radio waves are more susceptible to disruption than longer spectrum waves. All of the following can affect the strength of a millimeter wave signal:

  • Mechanical resonance. Molecules of air in the atmosphere naturally resonate (think of this as vibrating molecules) at millimeter wave frequencies, with the biggest natural interference coming at 24 GHz and 60 GHz.
  • Atmospheric absorption. The atmosphere naturally absorbs (or cancels out) millimeter waves. For example, oxygen absorption is highest at 60 GHz.
  • Scattering. Millimeter waves are easily scattered. For example, the millimeter wave signal is roughly the same size as a raindrop, so rain will scatter the signal.
  • Brightness temperature. This refers to the phenomenon where millimeter waves absorb high frequency electromagnetic radiation whenever they interact with air or water molecules, and this degrades the signal.
  • Line-of-sight. Millimeter wave spectrum doesn’t pass through obstacles and will be stopped by leaves and almost everything else in the environment. This happens to some degree with all radio waves, but at lower frequencies (with longer wavelengths) the signal can still get delivered by passing through or bouncing off objects in the environment (such as a neighboring house) and still reach the receiver. However, millimeter waves are so short that they are unable to recover from a collision with an object between the transmitter and receiver, and thus the signal is lost upon collision with almost anything.

One interesting aspect of this spectrum is that the antennas used to transmit and receive millimeter wave signals are tiny – you can squeeze a dozen or more antennas into a square inch. One drawback of using millimeter wave spectrum for cellphones is that it takes a lot of power to operate multiple antennas, so this spectrum won’t be practical for cellphones until we get better batteries.

However, the primary drawback of small antennas is the small target area used to receive a signal. It doesn’t take a lot of spreading and dispersion of the signal to miss the receiver. For spectrum in the 30 GHz range the full signal strength (and maximum bandwidth achievable) to a receiver can only carry for about 300 feet. With greater distances the signal continues to spread and weaken, and the physics show that the maximum distance to get any decent bandwidth at 30 GHz is about 1,200 feet. It’s worth noting that a receiver at 1,200 feet is receiving significantly less data than one at a few hundred feet. With higher frequencies the distances are even less. For example, at 60 GHz the signal dies off after only 150 feet. At 100 GHz the signal dies off in 4 – 6 feet.

To sum all of this up, millimeter wave transmission requires a relatively open path without obstacles. Even in ideal conditions a pole-mounted 5G transmitter isn’t going to deliver decent bandwidth past about 1,200 feet, with the effective amount of bandwidth decreasing as the signal travels more than 300 feet. Higher frequencies mean even less distance. Millimeter waves will perform better in places with few obstacles (like trees) or where there is low humidity. Using millimeter wave spectrum presents a ton of challenges for cell phones – the short distances are a big limitation as well as the extra battery life needed to support extra antennas. Any carrier that talks about deploying millimeter wave in a way that doesn’t fit the basic physics is exaggerating their plans.

The Huge CenturyLink Outage

At the end of December CenturyLink had a widespread network outage that lasted over two days. The outage disrupted voice and broadband service across the company’s wide service territory.

Probably the most alarming aspect of the outage is that it knocked out the 911 systems in parts of fourteen states. It was reported that calls to 911 might get a busy signal or a recording saying that “all circuits are busy.” In other cases, 911 calls were routed to the wrong 911 center. Some jurisdictions responded to the 911 problems by sending out emergency text messages to citizens providing alternate telephone numbers to dial during an emergency. The 911 service outages prompted FCC Chairman Ajit Pai to call CenturyLink and to open a formal investigation into the outage.

I talked last week to a resident of a small town in Montana who said that the outage was locally devastating. Credit cards wouldn’t work for most of the businesses in town, including at gas stations. Businesses that rely on software in the cloud for daily operations, like hotels, were unable to function. Bank ATMs weren’t working. Customers with CenturyLink landlines had spotty service and mostly could not make or receive phone calls. Worse yet, cellular service in the area largely died, meaning that CenturyLink must have been supplying the broadband circuits supporting the cellular towers.

CenturyLink reported that the outage was caused by a faulty networking management card in a Colorado data center that was “propagating invalid frame packets across devices”. It took the company a long time to isolate the problem, and the final fix involved rebooting much of the network electronics.

Every engineer I’ve spoken to about this says that in today’s world it’s hard to believe that it would take 2 days to isolate and fix a network problem caused by a faulty card. Most network companies operate a system of alarms that instantly notify them when any device or card is having problems. Further, complex networks today are generally supplied with significant redundancy that allows the isolation of troubled components of a network in order to stop the kind of cascading outage that occurred in this case. The engineers all said that it’s almost inconceivable to have a single component like a card in a modern network that could cause such a huge problem. While network centralization can save money, few companies route their whole network through choke points – there are a dozen different strategies to create redundancy and protect against this kind of outage.

Obviously none of us knows any of the facts beyond the short notifications issued by CenturyLink at the end of the outage, so we can only speculate about what happened. Hopefully the FCC inquiry will uncover the facts – and it’s important that it does, because it’s always possible that the cause of the outage is something that others in the industry need to be concerned about.

I’m only speculating, but my guess is that we are going to find that the company has not implemented best network practices in the legacy telco network. We know that CenturyLink and the other big telcos have been ignoring the legacy networks for decades. We see this all of the time when looking at the conditions of the last mile network, and we’ve always figured that the telcos were also not making the needed investments at the network core.

If this outage was caused by outdated technology and legacy network practices then such outages are likely to recur. Interestingly, CenturyLink also operates one of the more robust enterprise cloud services in the country. That business got a huge shot in the arm through the merger with Level 3, with new management saying that all of their future focus is going to be on the enterprise side of the house. I have to think that this outage didn’t much touch that network – it more likely hit the legacy network.

One thing for sure is that this outage is making CenturyLink customers look for an alternative. A decade ago the local government in Cook County, Minnesota – the northernmost county in the state – was so frustrated by continued prolonged CenturyLink network outages that they finally built their own fiber-to-the-home network and found alternate routing into and out of the county. I talked to one service provider in Montana who said they’ve been inundated since this recent outage by businesses looking for an alternative to CenturyLink.

We have become so reliant on the Internet that major outages are unacceptable. Much of what we do every day relies on the cloud. The fact that this outage extended to cellular outages, a crash of 911 systems and the failure of credit card processing demonstrates how pervasive the network is in the background of our daily lives. It’s frightening to think that there are legacy telco networks that have been poorly maintained that can still cause these kinds of widespread problems.

I’m not sure what the fix is for this problem. The FCC has supposedly washed its hands of responsibility for broadband networks – so it might not be willing to tackle any meaningful solutions to prevent future network crashes. Ultimately the fix might be the one found by Cook County, Minnesota – communities finding their own network solutions that bypass the legacy networks.

A Strategy for Upgrading GPON

I’ve been asked a lot during 2018 if fiber overbuilders ought to be considering the next generation of PON technology that might replace GPON. They hear about the newer technologies from vendors and the press. For example, Verizon announced a few months ago that they would begin introducing Calix NGPON2 into their fiber network next year. The company did a test using the technology recently in Tampa and achieved 8 Gbps speeds. AT&T has been evaluating the other alternate technology, XGS-PON, and may be introducing it into their network in 2019.

Before anybody invests a lot of money in a GPON network it’s a good idea to always ask if there are better alternatives – as should be done for every technology deployed in the network.

One thing to consider is how Verizon plans on using NGPON2. They view this as the least expensive way to deliver bandwidth to a 5G network that consists of multiple small cells mounted on poles. They like PON technology because it accommodates multiple end-points using a single last-mile fiber, meaning a less fiber-rich network than with other 10-gigabit technologies. Verizon also recently began the huge task of consolidating their numerous networks and PON gives them a way to consolidate multi-gigabit connections of all sorts onto a single platform.

Very few of my clients operate networks that have a huge number of 10-gigabit local end points. Anybody that does should consider Verizon’s decision because NGPON2 is an interesting and elegant solution for handling multiple large customer nodes while also reducing the quantity of lit fibers in the network.

Most clients I work with operate PON networks to serve a mix of residential and business customers. The first question I always ask them is if a new technology will solve an existing problem in their network. Is there anything that a new technology can do that GPON can’t do? Are my clients seeing congestion in neighborhood nodes that are overwhelming their GPON network?

Occasionally I’ve been told that they want to provide faster connections to a handful of customers for which the PON network is not sufficient – they might want to offer dedicated gigabit or larger connections to large businesses, cell sites or schools. We’ve always recommended that clients design networks with the capability of large Ethernet connections external to the PON network. There are numerous affordable technologies for delivering a 10-gigabit pipe directly to a customer with active Ethernet. It seems like overkill to consider upgrading the electronics to all customers to satisfy the need of a few large customers rather than overlaying a second technology into the network. We’ve always recommended that networks have some extra fiber pairs in every neighborhood exactly for this purpose.

I’ve not yet heard an ISP tell me that they are overloading a residential PON network due to customer data volumes. This is not surprising. GPON was introduced just over a decade ago, and at that time the big ISPs offered speeds in the range of 25 Mbps to customers. GPON delivers 2.4 gigabits to up to 32 homes and can easily support residential gigabit service. At the time of introduction GPON was at least a forty-times increase in customer capacity compared to DSL and cable modems – a gigantic leap forward in capability. It takes a long time for consumer household usage to grow to fill that much new capacity. The next biggest leap forward we’ve seen was the leap from dial-up to 1 Mbps DSL – a 17-times increase in capacity.

Even if somebody starts reaching capacity on a GPON there are some inexpensive upgrades that are far less expensive than upgrading to a new technology. A GPON network won’t reach capacity evenly and would see it in some neighborhood nodes first. The capacity in a neighborhood GPON node can easily be doubled by cutting the size of the node in half, splitting it into two PONs. I have one client that did the math and said that as long as they can buy GPON equipment they would upgrade by splitting a few times – from 32 homes to 16, from 16 to 8, and maybe even from 8 to 4 – before they’d consider tearing out GPON for something new. Each such split doubles capacity, and splitting nodes three times would be an 8-fold increase in capacity. If we continue on the path of seeing household bandwidth demand double every three years, then splitting nodes twice would easily add more than another decade to the life of a PON network. In doing that math it’s important to understand that splitting a node actually more than doubles capacity because it also decreases the oversubscription factor for each customer on the node.
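For anybody who wants to see that node-splitting arithmetic laid out, here is a minimal sketch. The 2.4 Gbps figure is the standard GPON downstream rate; take rates, actual usage patterns and oversubscription are ignored to keep the illustration simple, which (as noted above) actually understates the real-world gain from splitting.

```python
# Shared downstream capacity per passed home as a GPON node is split.
GPON_DOWNSTREAM_MBPS = 2400  # standard GPON downstream rate

for homes_on_node in (32, 16, 8, 4):
    share = GPON_DOWNSTREAM_MBPS / homes_on_node
    print(f"{homes_on_node:2d} homes on the PON: {share:5.0f} Mbps of shared capacity per home")
```

Even at the original 32-home split, every home’s pro-rata share of the PON is 75 Mbps, and three splits takes that to 600 Mbps per home without changing the electronics platform.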

At CCG we’ve always prided ourselves on being technology neutral and vendor neutral. We think network providers should use the technology that most affordably fits the needs of their end users. We rarely see a residential fiber network where GPON is not the clear winner from a cost and performance perspective. We have clients using numerous active Ethernet technologies that are aimed at serving large businesses or for long-haul transport. But we are always open-minded and would readily recommend NGPON2 or XGS-PON if it were the best solution. We just have not yet seen a network where the new technology is the clear winner.

How Much Better is 802.11ax?

The new WiFi standard 802.11ax is expected to be ratified and released sometime next year. In the new industry nomenclature it will be called WiFi 6. A lot of the woes we have today with bandwidth in our homes are due to the current 802.11ac standard that this will be replacing. 802.11ax will introduce a number of significant improvements that ought to improve home WiFi performance.

To understand why these improvements are important we need to first understand the shortcomings of the current WiFi protocols. The industry groups that developed the current WiFi standards had no idea that WiFi would become so prevalent and that the average home might have dozens of WiFi capable devices. The current problems all arise from a WiFi router trying to satisfy multiple demands for a data stream from multiple devices. Unlike cellular technologies, WiFi has no central traffic cop and every device in the environment can make an equal claim for connectivity. When a WiFi router has more demands for usage than it has available channels it pauses and interrupts all data streams until it chooses how to reallocate bandwidth. In a busy environment these stops and restarts can be nearly continuous.

The improvements from 802.11ax will all come from smarter ways to handle requests for connectivity from multiple devices. There is only a small improvement in overall bandwidth, with a raw physical data rate of 500 Mbps compared to 422 Mbps for 802.11ac. Here are the major new innovations:

Orthogonal Frequency-Division Multiple Access (OFDMA). This improvement will likely have the biggest impact in a home. OFDMA can slice the few big existing WiFi channels into smaller channels called resource units. A router will be able to make multiple smaller bandwidth connections using resource units and avoid the packet collisions and the start/stop cycle of each device asking for primary connectivity.
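As a rough illustration of what resource units buy, here is a toy scheduler sketch. A 20 MHz 802.11ax channel can be divided into as many as nine 26-tone resource units, so up to nine small transmissions can be granted in the same transmit opportunity instead of each device seizing the whole channel in turn. The device names and queued byte counts are invented for the example, and this is nothing like a real 802.11ax scheduler – it just shows the parallelism the standard makes possible.

```python
# Toy illustration of OFDMA resource-unit scheduling on a 20 MHz channel.
MAX_26_TONE_RUS_PER_20MHZ = 9  # a 20 MHz 802.11ax channel holds up to nine 26-tone RUs

queued_bytes = {  # invented devices with small amounts of waiting traffic
    "smart thermostat": 120,
    "video doorbell": 300,
    "smart speaker": 950,
    "laptop keepalive": 80,
}

def grant_resource_units(devices: dict) -> list:
    """Give each waiting device one 26-tone RU, largest queue first, up to the channel limit."""
    waiting = sorted(devices, key=devices.get, reverse=True)
    return [(name, "1 x 26-tone RU") for name in waiting[:MAX_26_TONE_RUS_PER_20MHZ]]

for device, grant in grant_resource_units(queued_bytes):
    print(f"{device}: granted {grant} in this transmit opportunity")
```

Under today’s 802.11ac behavior those four little transmissions would each have to contend for the entire channel, one after another, pausing everything else in the process.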

Bi-Directional Multi-User MIMO. In the last few years we’ve seen home WiFi routers introduce MIMO, which uses multiple antennas to make connections to different devices. This solves one of the problems of WiFi by allowing multiple devices to download separate data streams at the same time without interference. But today’s WiFi MIMO still has one big problem in that it only works for downloading. Whenever any device requests a channel for uploading, today’s MIMO pauses all the downloading streams. Bi-Directional MIMO will allow for 2-way data streams, meaning that a request to upload won’t kill downstream transmissions.

Spatial Frequency Reuse. This will have the most benefit in apartments or in homes that have networked multiple WiFi routers. Today a WiFi transmission will pause for any request for connection, even for connections made to a neighbor’s router from the neighbor’s devices. Spatial Frequency Reuse doesn’t fix that problem, but it allows neighboring 802.11ax routers to coordinate and to adjust the power of transmission requests to increase the chance that a device can connect to, and stay connected to, the proper router.

Target Wake Time. This will allow small devices to remain silent most of the time and only communicate at specific, pre-set times. Today a WiFi router can’t distinguish between a request from a smart blender and a smart TV, and requests from multiple small devices can badly interfere with the streams we care about to big devices. This feature will reduce, and distribute over time, the requests for connectivity from the ever-growing horde of small devices we all have.

There’s no rush to go out and buy an 802.11ax router, although tech stores will soon be pushing them. Like all generations of WiFi they will be backwards compatible with earlier WiFi standards, but for a few years they won’t do anything differently than your current router. This is because all of the above features require updated WiFi edge devices that also contain the new 802.11ax standard. There won’t be many devices manufactured with the new standard even in 2019. Even after we introduce 802.11ax devices into our homes we’ll continue to be frustrated, since our older WiFi edge devices will continue to communicate in the same inefficient way as today.

Private 5G Networks

One of the emerging uses for 5G is to create private 5G cellular networks for large businesses. The best candidates for 5G technology are businesses that need to connect and control a lot of devices or those that need the low latency promised by the 5G standards. This might include businesses like robotized factories, chemical plants, busy shipping ports and airports.

5G has some advantages over other technologies like WiFi, 4G LTE and Ethernet that make it ideal for communications-rich environments. A cellular network can replace the costly and bulky hard-wired networks needed for Ethernet. It’s not practical to wire an Ethernet network to the hordes of tiny IoT sensors that are needed to operate a modern factory. It’s also not practical to have a hard-wired network in a dynamic environment where equipment needs to be moved for various purposes.

5G holds a number of advantages over WiFi and 4G. Frequency slicing means that just the right amount of bandwidth can be delivered to every device in the factory, from the smallest sensor to devices that must upload or download large amounts of data. The 5G standard also allows for setting priorities by device so that mission critical devices always get priority over background devices. The low latency on 5G means that there can be real time coordination and feedback between devices when that’s needed for time-critical manufacturing devices. 5G also offers the ability to communicate simultaneously with a huge number of devices, something that is not practical or possible with WiFi or LTE.

Any discussion of IoT in the past has generally evoked images of factories with huge numbers of tiny sensors that monitor and control every aspect of the manufacturing process. While there have been big strides in developing robotized factories, the concept of a concentrated communications mesh to control them has not been possible until the 5G standard.

We are a few years away from having 5G networks that can deliver on all of the promised benefits of the standard. The big telecom manufacturers like Ericsson, Huawei, Qualcomm and Nokia along with numerous smaller companies are working on perfecting the technology and the devices that will support advanced IoT networks.

I read that an Audi plant in Germany is already experimenting with a private cellular network to control the robots that glue car components together. Its robot networks were hard-wired and were not providing fast enough feedback to the robots for the precision the tasks require. The company says it’s pleased with the performance so far. However, that test was not yet real 5G, and any real use of 5G in factories is still a few years off as manufacturers perfect the wireless technology and the sensor networks.

Probably the biggest challenge in the US will be finding the spectrum to make this work. In the US most of the spectrum that is best suited to operating a 5G factory is sold in huge geographic footprints, and that spectrum will be owned by the typical large spectrum holders. Large factory owners might agree to lease spectrum from the large carriers, but they are not going to want those carriers to insert themselves into the design or operation of these complex networks.

In Europe there are already discussions at the various regulatory bodies about possibly setting aside spectrum for factories and other large private users. However, in this country doing so means opening the door to selling spectrum for smaller footprints – something the large wireless carriers would surely challenge. It would be somewhat ironic if the US takes the lead in developing 5G technology but then can’t make it work in factories due to our spectrum allocation policies.