The Huge CenturyLink Outage

At the end of December CenturyLink had a widespread network outage that lasted over two days. The outage disrupted voice and broadband service across the company’s wide service territory.

Probably the most alarming aspect of the outage is that it knocked out 911 systems in parts of fourteen states. It was reported that calls to 911 might get a busy signal or a recording saying that “all circuits are busy”. In other cases, 911 calls were routed to the wrong 911 center. Some jurisdictions responded to the 911 problems by sending out emergency text messages giving citizens alternate telephone numbers to dial during an emergency. The 911 service outages prompted FCC Chairman Ajit Pai to call CenturyLink and to open a formal investigation into the outage.

I talked last week to a resident of a small town in Montana who said that the outage was locally devastating. Credit cards wouldn’t work for most of the businesses in town, including at gas stations. Businesses like hotels that rely on cloud software for daily operations were unable to function. Bank ATMs weren’t working. Customers with CenturyLink landlines had spotty service and mostly could not make or receive phone calls. Worse yet, cellular service in the area largely died, meaning that CenturyLink must have been supplying the broadband circuits supporting the cellular towers.

CenturyLink reported that the outage was caused by a faulty network management card in a Colorado data center that was “propagating invalid frame packets across devices”. It took the company a long time to isolate the problem, and the final fix involved rebooting much of the network electronics.

Every engineer I’ve spoken to about this says that in today’s world it’s hard to believe it would take two days to isolate and fix a network problem caused by a faulty card. Most network companies operate systems of alarms that instantly notify them when any device or card is having problems. Further, complex networks today are generally built with significant redundancy that allows troubled components to be isolated in order to stop the kind of cascading outage that occurred in this case. The engineers all said it’s almost inconceivable that a single component like a card could cause such a huge problem in a modern network. While network centralization can save money, few companies route their whole network through choke points – there are a dozen different strategies to create redundancy and protect against this kind of outage.

Obviously none of us knows any facts beyond the short notifications issued by CenturyLink at the end of the outage, so we can only speculate about what happened. Hopefully the FCC inquiry will uncover the facts – and it’s important that it does, because it’s always possible that the cause of the outage is something that others in the industry need to be concerned about.

I’m only speculating, but my guess is that we are going to find that the company has not implemented best network practices in its legacy telco network. We know that CenturyLink and the other big telcos have been ignoring their legacy networks for decades. We see this all of the time when looking at the condition of last-mile networks, and we’ve always figured that the telcos were also not making the needed investments at the network core.

If this outage was caused by outdated technology and legacy network practices, then such outages are likely to recur. Interestingly, CenturyLink also operates one of the more robust enterprise cloud services in the country. That business got a huge shot in the arm through the merger with Level 3, with new management saying that all of the company’s future focus is going to be on the enterprise side of the house. I have to think that this outage didn’t much touch that network and more likely hit the legacy network.

One thing is for sure: this outage is making CenturyLink customers look for an alternative. A decade ago the local government in Cook County, Minnesota – the northernmost county in the state – was so frustrated by repeated prolonged CenturyLink network outages that it finally built its own fiber-to-the-home network and found alternate routing into and out of the county. I talked to one service provider in Montana who said they’ve been inundated since this recent outage by businesses looking for an alternative to CenturyLink.

We have become so reliant on the Internet that major outages are unacceptable. Much of what we do every day relies on the cloud. The fact that this outage extended to cellular outages, a crash of 911 systems and the failure of credit card processing demonstrates how pervasive the network is in the background of our daily lives. It’s frightening to think that poorly maintained legacy telco networks can still cause these kinds of widespread problems.

I’m not sure what the fix is for this problem. The FCC has supposedly washed its hands of responsibility for broadband networks – so it might not be willing to tackle any meaningful solutions to prevent future network crashes. Ultimately the fix might be the one found by Cook County, Minnesota – communities finding their own network solutions that bypass the legacy networks.

A Strategy for Upgrading GPON

I’ve been asked a lot during 2018 whether fiber overbuilders ought to be considering the next generation of PON technology that might replace GPON. They hear about the newer technologies from vendors and the press. For example, Verizon announced a few months ago that it would begin introducing Calix NGPON2 into its fiber network next year. The company recently ran a test of the technology in Tampa and achieved 8 Gbps speeds. AT&T has been evaluating the other alternative technology, XGS-PON, and may introduce it into its network in 2019.

Before anybody invests a lot of money in a GPON network it’s a good idea to ask whether there are better alternatives – as should be done for every technology deployed in the network.

One thing to consider is how Verizon plans on using NGPON2. They view this as the least expensive way to deliver bandwidth to a 5G network that consists of multiple small cells mounted on poles. They like PON technology because it accommodates multiple end-points using a single last-mile fiber, meaning a less fiber-rich network than with other 10-gigabit technologies. Verizon also recently began the huge task of consolidating their numerous networks and PON gives them a way to consolidate multi-gigabit connections of all sorts onto a single platform.

Very few of my clients operate networks that have a huge number of 10-gigabit local end points. Anybody that does should consider Verizon’s decision because NGPON2 is an interesting and elegant solution for handling multiple large customer nodes while also reducing the quantity of lit fibers in the network.

Most clients I work with operate PON networks to serve a mix of residential and business customers. The first question I always ask is whether a new technology will solve an existing problem in their network. Is there anything a new technology can do that GPON can’t? Are my clients seeing congestion in neighborhood nodes that overwhelms their GPON network?

Occasionally I’ve been told that they want to provide faster connections to a handful of customers for whom the PON network is not sufficient – they might want to offer dedicated gigabit or larger connections to large businesses, cell sites or schools. We’ve always recommended that clients design networks with the capability of providing large Ethernet connections external to the PON network. There are numerous affordable active Ethernet technologies for delivering a 10-gigabit pipe directly to a customer. It seems like overkill to upgrade the electronics for all customers to satisfy the needs of a few large ones rather than overlaying a second technology into the network. We’ve always recommended that networks include some extra fiber pairs in every neighborhood exactly for this purpose.

I’ve not yet heard an ISP tell me that they are overloading a residential PON network due to customer data volumes. This is not surprising. GPON was introduced just over a decade ago, when the big ISPs offered speeds in the range of 25 Mbps. GPON delivers 2.4 gigabits to up to 32 homes and can easily support residential gigabit service. At the time of its introduction GPON represented at least a forty-times increase in customer capacity compared to DSL and cable modems – a gigantic leap forward in capability. It takes a long time for consumer household usage to grow to fill that much new capacity. The previous biggest leap forward was from dial-up to 1 Mbps DSL – a 17-times increase in capacity.

Even if somebody starts reaching capacity on a GPON network there are some inexpensive upgrades that are far less costly than migrating to a new technology. A GPON network won’t reach capacity evenly and will see it in some neighborhood nodes first. The capacity in a neighborhood GPON node can easily be doubled by cutting the size of the node in half, splitting it into two PONs. I have one client that did the math and said that as long as they can buy GPON equipment they would upgrade by splitting a few times – from 32 homes to 16 and from 16 homes to 8, and maybe even from 8 to 4 customers before they’d consider tearing out GPON for something new. Each such split doubles capacity, and splitting nodes three times would be an 8-fold increase in capacity. If we continue on the path of seeing household bandwidth demand double every three years, then splitting nodes twice would easily add more than another decade to the life of a PON network. In doing that math it’s important to understand that splitting a node actually more than doubles effective capacity because it also decreases the oversubscription factor for each customer on the node.
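The node-splitting math is easy to sketch. Here’s a minimal back-of-the-envelope calculation in Python – the 2.4 Gbps GPON capacity and the doubling-every-three-years demand assumption come from the discussion above, and the rest is simple arithmetic:

    # Back-of-the-envelope GPON node-splitting math.
    GPON_CAPACITY_MBPS = 2400  # downstream capacity shared by one PON

    for homes in (32, 16, 8, 4):  # each split halves the node size
        per_home = GPON_CAPACITY_MBPS / homes
        print(f"{homes:>2} homes per PON -> {per_home:,.0f} Mbps per home")

    # Raw capacity alone: each split doubles capacity, and one demand
    # doubling every three years means each split buys roughly three years.
    # Reduced oversubscription stretches that further, which is why the
    # estimate above is more than a decade for two splits.
    for splits in (1, 2, 3):
        print(f"{splits} split(s) = {2 ** splits}x capacity, ~{splits * 3}+ years of raw headroom")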

At CCG we’ve always prided ourselves on being technology neutral and vendor neutral. We think network providers should use the technology that most affordably fits the needs of their end users. We rarely see a residential fiber network where GPON is not the clear winner from a cost and performance perspective. We have clients using numerous active Ethernet technologies aimed at serving large businesses or long-haul transport. But we are always open-minded and would readily recommend NGPON2 or XGS-PON if it were the best solution. We just have not yet seen a network where the new technology is the clear winner.

How Much Better is 802.11ax?

The new WiFi standard 802.11ax is expected to be ratified and released sometime next year. In the new industry nomenclature it will be called WiFi 6. A lot of the woes we have today with bandwidth in our homes are due to the current 802.11ac standard that it will be replacing. 802.11ax will introduce a number of significant improvements that ought to improve home WiFi performance.

To understand why these improvements matter we first need to understand the shortcomings of the current WiFi protocols. The industry groups that developed the current WiFi standards had no idea that WiFi would become so prevalent and that the average home might have dozens of WiFi-capable devices. The current problems all arise from a WiFi router trying to satisfy demands for data streams from multiple devices. Unlike cellular technologies, WiFi has no central traffic cop, and every device in the environment can make an equal claim for connectivity. When a WiFi router has more demands for usage than it has available channels it pauses and interrupts all data streams until it chooses how to reallocate bandwidth. In a busy environment these stops and restarts can be nearly continuous.

The improvements from 802.11ax will all come from smarter ways to handle requests for connectivity from multiple devices. There is only a small improvement in overall bandwidth, with a raw physical data rate of 500 Mbps compared to 422 Mbps for 802.11ac. Here are the major new innovations:

Orthogonal Frequency-Division Multiple Access (OFDMA). This improvement will likely have the biggest impact in a home. OFDMA can slice the few big existing WiFi channels into smaller channels called resource units. A router will be able to make multiple smaller-bandwidth connections using resource units and avoid packet collisions and the start/stop cycle of each device asking for primary connectivity.
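A toy simulation shows why this matters. This is purely my own illustration – the device demands, channel size and overhead percentages are invented, not measurements of real WiFi gear:

    # Toy comparison: whole-channel contention vs. OFDMA resource units.
    # All numbers are invented for illustration.
    devices = {"tv": 25.0, "laptop": 10.0, "thermostat": 0.1, "doorbell": 0.1}  # Mbps wanted
    channel_mbps = 40.0

    # Legacy behavior: one device holds the channel at a time, and every
    # switch between devices costs a stop/restart pause.
    switch_penalty = 0.10  # airtime lost per active device to stop/restarts
    legacy_goodput = channel_mbps * (1 - switch_penalty * len(devices))

    # OFDMA: the channel is sliced into resource units sized to each flow,
    # so tiny IoT flows ride in small slices instead of seizing the channel.
    ofdma_overhead = 0.05
    ofdma_goodput = channel_mbps * (1 - ofdma_overhead)

    print(f"legacy shared channel: {legacy_goodput:.1f} Mbps usable")
    print(f"OFDMA resource units:  {ofdma_goodput:.1f} Mbps usable")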

Bi-Directional Multi-User MIMO. In the last few years we’ve seen home WiFi routers introduce MIMO, which uses multiple antennas to make connections to different devices. This solves one of the problems of WiFi by allowing multiple devices to download separate data streams at the same time without interference. But today’s WiFi MIMO still has one big problem: it only works for downloading. Whenever any device requests a channel for uploading, today’s MIMO pauses all the downloading streams. Bi-directional MIMO will allow 2-way data streams, meaning that a request to upload won’t kill downstream transmissions.

Spatial Frequency Reuse. This will have the most benefit in apartments or in homes that have networked multiple WiFi routers. Today a WiFi transmission will pause for any request for connection, even for connections made to a neighbor’s router from the neighbor’s devices. Spatial Frequency Reuse doesn’t fix that problem entirely, but it allows neighboring 802.11ax routers to coordinate and to adjust the power of transmission requests to increase the chance that a device can connect to, and stay connected to, the proper router.

Target Wake Time. This will allow small devices to remain silent most of the time and communicate only at specific, pre-set times. Today a WiFi router can’t distinguish between a request from a smart blender and one from a smart TV, and requests from multiple small devices can badly interfere with the streams we care about to big devices. This feature will reduce the requests for connectivity from the ever-growing horde of small devices we all have, and spread those requests out over time.
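Target Wake Time is essentially a calendar for small devices. Here’s a minimal sketch of the idea – the device names, wake intervals and window sizes are all invented for illustration:

    # Minimal Target Wake Time sketch: each small device agrees to wake
    # only at preset intervals, staying silent otherwise. All values invented.
    from dataclasses import dataclass

    @dataclass
    class TwtAgreement:
        device: str
        wake_interval_s: int  # how often the device wakes to talk
        window_ms: int        # how long it may transmit per wake

    schedule = [
        TwtAgreement("door_sensor", wake_interval_s=60, window_ms=5),
        TwtAgreement("thermostat", wake_interval_s=300, window_ms=10),
        TwtAgreement("smart_blender", wake_interval_s=3600, window_ms=5),
    ]

    for a in schedule:
        wakes_per_hour = 3600 / a.wake_interval_s
        airtime_ms = wakes_per_hour * a.window_ms
        print(f"{a.device:13} wakes {wakes_per_hour:>4.0f}x/hour, ~{airtime_ms:.0f} ms of airtime")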

There’s no rush to go out and buy an 802.11ax router, although tech stores will soon be pushing them. Like all generations of WiFi they will be backwards compatible with earlier WiFi standards, but for a few years they won’t do anything differently than your current router. This is because all of the above features require updated WiFi edge devices that also contain the new 802.11ax standard. There won’t be many devices manufactured with the new standard even in 2019. Even after we introduce 802.11ax devices into our homes we’ll continue to be frustrated, since our older WiFi edge devices will keep communicating in the same inefficient way they do today.

Private 5G Networks

One of the emerging uses for 5G is to create private 5G cellular networks for large businesses. The best candidates for 5G technology are businesses that need to connect and control a lot of devices or those that need the low latency promised by the 5G standards. This might include businesses like robotized factories, chemical plants, busy shipping ports and airports.

5G has some advantages over other technologies like WiFi, 4G LTE and Ethernet that make it ideal for communications-rich environments. A cellular network can replace the costly and bulky hard-wired networks needed for Ethernet. It’s not practical to wire an Ethernet network to the hordes of tiny IoT sensors needed to operate a modern manufacturing plant. It’s also not practical to have a hard-wired network in a dynamic environment where equipment needs to be moved for various purposes.

5G holds a number of advantages over WiFi and 4G. Network slicing means that just the right amount of bandwidth can be delivered to every device in the factory, from the smallest sensor to devices that must upload or download large amounts of data. The 5G standard also allows for setting priorities by device, so that mission-critical devices always get priority over background devices. The low latency of 5G means there can be real-time coordination and feedback between devices when that’s needed for time-critical manufacturing processes. 5G also offers the ability to communicate simultaneously with a huge number of devices, something that is not practical or possible with WiFi or LTE.
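Here’s a sketch of what per-device prioritization might look like in a factory controller – the device classes and priority values are invented for illustration (real 5G maps traffic to standardized QoS identifiers):

    # Illustrative priority scheduling: mission-critical traffic is always
    # served before background traffic. Priority values are invented.
    import heapq

    PRIORITY = {"robot_arm_control": 0, "safety_sensor": 1,
                "inventory_tag": 5, "firmware_update": 9}

    queue = []
    for device, payload in [("inventory_tag", "count update"),
                            ("firmware_update", "blob"),
                            ("safety_sensor", "alarm"),
                            ("robot_arm_control", "halt")]:
        heapq.heappush(queue, (PRIORITY[device], device, payload))

    while queue:
        prio, device, payload = heapq.heappop(queue)
        print(f"serving {device} (priority {prio}): {payload}")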

Any discussion of IoT in the past has generally evoked visions of factories with huge numbers of tiny sensors that monitor and control every aspect of the manufacturing process. While there have been big strides in developing robotized factories, that concept of a concentrated communications mesh controlling the factory has not been possible until the 5G standard.

We are a few years away from having 5G networks that can deliver on all of the promised benefits of the standard. The big telecom manufacturers like Ericsson, Huawei, Qualcomm and Nokia along with numerous smaller companies are working on perfecting the technology and the devices that will support advanced IoT networks.

I read that an Audi plant in Germany is already experimenting with a private cellular network to control the robots that glue car components together. Its robot networks were hard-wired and were not providing feedback to the robots fast enough for the needed precision of the tasks. The company says it’s pleased with the performance so far. However, that test was not yet real 5G, and any real use of 5G in factories is still a few years off as manufacturers perfect the wireless technology and the sensor networks.

Probably the biggest challenge in the US will be finding the spectrum to make this work. Here, most of the spectrum best suited to operating a 5G factory is sold in huge geographic footprints and will be owned by the typical large spectrum holders. Large factory owners might agree to lease spectrum from the large carriers, but they are not going to want those carriers to insert themselves into the design or operation of these complex networks.

In Europe there are already discussions at the various regulatory bodies about setting aside spectrum for factories and other large private users. However, in this country doing so means opening the door to selling spectrum in smaller footprints – something the large wireless carriers would surely challenge. It would be somewhat ironic if the US takes the lead in developing 5G technology but then can’t make it work in factories due to our spectrum allocation policies.

Femtocells Instead of Small Cells?

I have just seen the future of broadband, and it does not consist of building millions of small 5G cell sites on poles. CableLabs has developed a femtocell technology that might already have made outdoor 5G small cell technology obsolete. Femtocells have been around for many years and have been deployed in rural areas to provide a connection to the cellular network through a landline broadband connection. That need has largely evaporated due to the ability of cellphone apps to make WiFi calls directly.

The concept of a femtocell is simple – it’s a small box that uses cellular frequencies to communicate with cellular devices and then hands off calls to a landline data connection. Functionally a femtocell is a tiny cell site that can handle a relatively small volume of simultaneous cellular calls.

According to CableLabs, deploying a femtocell inside a household is far more efficient than trying to communicate with the household from a nearby pole-mounted transmitter. Femtocells eliminate one of the biggest weaknesses of outdoor small cell sites – much of the power of 5G is lost in passing through the external walls of a home. Broadcasting the cellular signal from within the house means a much stronger 5G signal throughout the home, allowing for more robust 5G applications.

This creates what I think is the ultimate broadband network – one that combines a powerful landline data pipe with both 5G and WiFi wireless delivery within a home. This is the vision I’ve had for over a decade: a big landline data pipe for the last mile and powerful wireless networks for connecting to devices.

It’s fairly obvious that a hybrid femtocell / WiFi network has a huge cost advantage over the deployment of outdoor small cell sites on poles. It would eliminate the need for the expensive pole-mounted transmitters – and that would eliminate the battles we’re having about the proliferation of wireless devices. It’s also more efficient to deploy a femtocell network – you would deploy only to those homes that want the 5G features, meaning you don’t waste an expensive outdoor network to reach one or two customers. It’s not hard to picture an integrated box that contains both a WiFi modem and a cellular femtocell, meaning the cost to get 5G into the home would be a relatively cheap upgrade to WiFi routers rather than the deployment of a whole separate 5G network.

There are significant benefits for a home operating both 5G and WiFi, since each standard has advantages in certain situations within the home. As much as we love WiFi, it has big inherent weaknesses. A WiFi network bogs down, by definition, when too many devices call for a connection. Shuttling some devices in the home to 5G would reduce WiFi collisions and make WiFi better.

5G also has inherent advantages. An in-home 5G network could use network slicing to deliver exactly the right amount of bandwidth to each device. It’s not hard to picture a network where 5G is used to communicate with cellphones and small sensors of various types while WiFi is reserved for communicating with large-bandwidth devices like TVs and computers.
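A sketch of the steering policy such a combined box might apply – the device categories and the rule itself are my own invention for illustration:

    # Illustrative radio-steering policy for a combined femtocell/WiFi box.
    # The categories and the rule are invented for this sketch.
    def choose_radio(device_type: str) -> str:
        # Cellphones and small sensors ride the in-home 5G femtocell;
        # big-bandwidth devices like TVs and computers stay on WiFi.
        cellular_devices = {"cellphone", "sensor", "smartwatch"}
        return "5G femtocell" if device_type in cellular_devices else "WiFi"

    for dev in ("cellphone", "sensor", "tv", "computer"):
        print(f"{dev:9} -> {choose_radio(dev)}")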

One huge advantage of a femtocell network is that it could be deployed anywhere. The cellular companies are likely to cherry-pick outdoor 5G network deployments, serving only neighborhoods where the cost of backhaul is affordable – meaning that many neighborhoods will never get 5G, just as many neighborhoods in the northeast never got Verizon FiOS. You could deploy a hybrid femtocell to one customer on a block and still be profitable. Femtocells also eliminate the problems of homes that don’t have line-of-sight to a pole-mounted network.

This technology obviously favors those who have built fast broadband – cable companies that have upgraded to DOCSIS 3.1 and fiber overbuilders. For those businesses this is an exciting new product and another new revenue stream to help replace shrinking cable TV and telephone revenues.

One issue that would need to be solved is spectrum, since most of it is licensed to cellular companies. The big cable companies now own some spectrum, but smaller cable companies and fiber overbuilders own none. There is no particular reason why 5G inside a home couldn’t coexist with WiFi, with both using unlicensed spectrum and some channels dedicated to each wireless technology. That would become even easier if the FCC goes through with plans to release the 6 GHz spectrum as the next unlicensed band. A femtocell network could also utilize unlicensed millimeter wave frequencies.

We’ll obviously continue to need outdoor cellular networks to accommodate voice and data roaming, but those are already in place today. Rather than spend tens of billions to upgrade those networks to deliver 5G data to homes, far less expensive upgrades can be made to augment them only where needed rather than putting multiple small cells on every city block.

It’s been my experience over forty years of watching the industry that in the long run the most efficient technology usually wins. If CableLabs develops the right home boxes for this technology, then the cable companies will be able to blitz the market with 5G much faster, and at a far lower cost, than Verizon or AT&T.

It would be ironic if the best 5G solution also happens to need the fastest pipe into the home. The decisions by big telcos to not deploy fiber over the last few decades might start looking like a huge tactical blunder. It looks to me like CableLabs and the cable companies might have found the winning 5G solution for residential service.

eSim

One of the big goals for 5G is to use the technology to communicate with numerous devices other than cellphones and tablets. In order for that to happen the cellular industry is going to have to adopt eSim technology, which means creating virtual sim cards inside devices rather than requiring the physical sim card used today in cellphones.

Traditional sim cards don’t play well in the IoT world. Many IoT devices will be tiny, low-power sensors that are too small to hold a sim card. But probably more importantly, for IoT to grow as envisioned by the cellular carriers, customers are going to need an easy way to change wireless carriers without having to change a physical sim.

Picture the future smart home with numerous smart devices that tie into a cellular network to get to the cloud. It’s likely that most devices you buy will come with a pre-paid subscription to some specific carrier, and that eventually that carrier will want homeowners to pay a monthly fee to continue the monitoring. I picture the nightmare where I might have devices monitored by each of the major cellular carriers, and each is going to want me to pay a monitoring fee to keep my devices connected to the cloud.

The only way most homes are going to accept this vision of the world is if they can migrate all of their devices to the same cellular network. And that means a homeowner (or farmer or factory owner) is going to want the option of homing all of their devices to the carrier of their choice. That’s where eSim comes in – it’s a virtual sim card that can be redirected at will by the customer without having to deal with physical sim cards. I envision sim manager software that will register and track all of my sim devices and that could move them en masse to a new carrier at my command.
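Here’s a minimal sketch of what such sim manager software might look like – the class and carrier names are hypothetical, and real eSim provisioning would run through carrier and GSMA remote-provisioning infrastructure rather than a local script:

    # Hypothetical sim-manager sketch: register eSim devices and re-home
    # them all to a new carrier in one command. This only models the idea;
    # it is not a real provisioning API.
    from dataclasses import dataclass, field

    @dataclass
    class EsimDevice:
        name: str
        carrier: str

    @dataclass
    class SimManager:
        devices: list = field(default_factory=list)

        def register(self, name, carrier):
            self.devices.append(EsimDevice(name, carrier))

        def move_all(self, new_carrier):
            for d in self.devices:
                print(f"re-homing {d.name}: {d.carrier} -> {new_carrier}")
                d.carrier = new_carrier

    mgr = SimManager()
    mgr.register("thermostat", "Carrier A")
    mgr.register("doorbell", "Carrier B")
    mgr.move_all("Carrier C")  # the en-masse migration described above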

Today’s sim card technology is a dinosaur, and I liken it to the analog settop boxes that cable companies forced customers to rent from them. Cellular carriers have been extremely slow in accepting eSim technology because they know that having to physically change a sim card is a barrier that stops some customers from changing service to another carrier. The big cellular companies say they have been working on eSim technology, but it’s been dragging slowly forward for years.

There are already products using eSim. For example, the Samsung Gear S2 smartwatch was the first commercial device to include eSim, in 2016. Samsung used the eSim technology because there wasn’t room for a sim card. However, this is not an eSim of the kind I described above – a customer can’t change the carrier on a smartwatch that comes preset by Samsung. Still, these early eSim devices show that the technology works.

There are carriers in the country pushing for eSim – for example, smaller and regional cellular carriers like C-Spire and Ting. Some of the big cable companies are pushing for the technology as well.

What’s needed to make eSim work is a set of universal standards that would allow a customer to aim the eSim at the carrier of their choice. And that is going to take the cooperation of the big cellular companies. There is enough pressure on them that this change is likely to start happening over the next few years. Hopefully eSim will just become part of the expected background technology that makes devices work on cellular networks, and customers in the future will be able to easily choose their cellular carrier without the hassle of dealing with every cellular device in their home. My guess is that teenagers a decade from now will never have heard of a sim card and it will be just another obsolete technology.

Another Spectrum Battle

Back in July the FCC issued a Notice of Proposed Rulemaking seeking comment on opening up the spectrum from 3.7 GHz to 4.2 GHz, known as the C-Band. As is happening with every block of usable spectrum, there is a growing tug-of-war between using this spectrum for 5G and using it for rural broadband.

This C-Band spectrum has traditionally been used to transmit signals from satellites back to earth stations. Today it’s in use by every cable company that receives cable TV signals at a ‘big-dish’ satellite farm. The spectrum had much wider use in the past when it delivered signal directly to customers using the giant 7 – 10 foot dishes you used to see in rural backyards.

This spectrum is valuable for either cellular data or for point-to-multipoint rural radio broadband systems. The spectrum sits in the middle between the 2.4 GHz and the 5.8 GHz used today for delivering most rural broadband. The spectrum is particularly attractive because of the size of the block, at 500 megahertz.

When the FCC released the NPRM, the four big satellite companies – Intelsat, SES, Eutelsat and Telesat – created the C-Band Alliance. They’ve suggested that some of their current use of this spectrum could be moved elsewhere. Where moving isn’t easy, the group volunteered to act as the clearinghouse that coordinates other uses of the C-Band so they won’t interfere with satellite use. The Alliance suggests that this might require curtailing full use of the spectrum near some satellite farms, but it thinks the spectrum can be freed for full use in most places. The offer is seen as a way to convince the FCC not to force satellite companies completely out of the spectrum block.

I note that we are nearing a day when the big satellite earth stations used to receive TV might become obsolete. For example, we see AT&T delivering TV signal nationwide on fiber using only two headends and satellite farms. If all TV stations and all satellite farm locations were connected by fiber, these signals could be delivered terrestrially. I also note this is not the spectrum used by DirecTV and Dish Networks to connect to subscribers – they use the Ku-band at 12 – 18 GHz.

A group calling itself the Broadband Access Coalition (BAC) is asking the FCC to set aside the upper 300 megahertz of the band for rural broadband. This group is comprised of advocates for rural wireless broadband, including Baicells Technologies, Cambium Networks, Rise Broadband, Public Knowledge, the Open Technology Institute at New America, and others. The BAC proposal asks for frequency sharing that would allow the spectrum to be used for both 5G and rural broadband, using smart radios and databases to coordinate use.

Both the satellite providers and the 5G companies oppose the BAC idea. The satellite providers argue that it’s too complicated to share bandwidth, and they fear interference with satellite farms. The 5G companies want the whole band of spectrum and tout the advantages it will bring to 5G. They’d also like to see the spectrum go to auction, dangling the prospect that the FCC could collect $20 billion or more from an auction.

The FCC has it within their power to accommodate rural broadband as they deal with this block of spectrum. However, recent history with other spectrum bands shows the FCC to have a major bias towards the promise of 5G and towards raising money through auctions – which allocate frequency to a handful of the biggest names in the industry.

The BAC proposal would set aside part of the spectrum for rural broadband while leaving the whole band available to 5G on a shared and coordinated basis. We know that in real life the big majority of all ‘5G spectrum’ is not going to be deployed in rural America. The 5G providers legitimately need a huge amount of spectrum in urban areas if they are to accomplish everything they’ve touted for 5G. But in rural areas most bands of spectrum will sit idle because the spectrum owners won’t have an economic case for deploying in areas of low density.

The BAC proposal is an interesting mechanism that would free up the C-Band in areas where there is no other use of the spectrum while still fully accommodating 5G where it’s deployed. That’s the kind of creative thinking we need to see implemented.

The FCC keeps publicly saying that one of its primary goals is to improve rural broadband – as I wrote in a blog last week, that’s part of their stated goals for the next five years. This spectrum could be of huge value for point-to-multipoint rural radio systems and would be another way to boost rural broadband speeds. The FCC has it within their power to use the C-Band spectrum for both 5G and rural broadband – both uses can be accommodated. My bet, sadly, is that this will be another giveaway to the big cellular companies.

When Will Small ISPs Offer Wireless Loops?

I wrote last week about what it’s going to take for the big wireless companies to offer 5G fixed wireless in neighborhoods. Their biggest hurdle is going to be the availability of fiber deep inside neighborhoods. Today I look at what it would take for fiber overbuilders to integrate 5G wireless loops into their fiber networks. By definition, fiber overbuilders already build fiber deep into neighborhoods. What factors will enable fiber overbuilders to consider using wireless loops in those networks?

Affordable Technology. Number one on the list is cheaper technology. There is a long history in the wireless industry of new technologies only becoming affordable after at least one big company buys a lot of units. Fifteen years ago the FCC auctioned LMDS and MMDS spectrum with a lot of hoopla and promise, yet those spectrum bands were barely used because no big company elected to use them. The reality of the manufacturing world is that prices only come down with big volumes of sales. Manufacturers need enough revenue to see them through several rounds of technical upgrades and tweaks, which are always needed when fine-tuning how wireless gear works in the wild.

Verizon is the only company talking about deploying a significant volume of 5G fixed wireless equipment. However, their current first-generation equipment is not 5G compliant, and they won’t be deploying actual 5G gear for a few years. Time will tell if they buy enough gear to drive equipment prices to an affordable level for the rest of the industry. We also must consider that Verizon might use proprietary technology that won’t be available to others. The use of proprietary hardware is creeping throughout the industry and can be seen with gear like data center switches and Comcast’s settop boxes. The rest of the industry won’t benefit if Verizon takes the proprietary approach – yet another new worry for the industry.

Life Cycle Costs. Anybody considering 5G also needs to consider the full life cycle costs of 5G versus fiber. An ISP will need to compare the life cycle cost of fiber drops and fiber electronics against the cost of the 5G electronics. There are a couple of costs to consider (a rough cost sketch follows the list):

  • We don’t know what Verizon is paying for gear, but at this early stage of the industry my guess is that 5G electronics are still expensive compared to fiber drops.
  • Fiber drops last a long time. I would expect that most of the fiber drops built for Verizon FiOS over a decade ago are still going strong. It’s likely that 5G electronics on poles will have to be replaced or upgraded every 7 – 10 years.
  • Anybody that builds fiber drops to homes knows that over time some of those drops are abandoned as homes stop buying service. Over time there can be a sizable inventory of unused drops that aren’t driving any revenue – I’ve seen this grow to as much as 5% of total drops.
  • Another cost consideration is maintenance. We know from long experience that wireless networks require a lot more tinkering and maintenance effort than fiber networks. Fiber technology has gotten so stable that most companies know they can build fiber and not worry much about maintenance for the first five to ten years. Fiber technology is getting even more stable as many ISPs move the ONTs inside the premise. That’s going to be hard to match with 5G wireless networks exposed to varying temperature and precipitation conditions.
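The comparison lends itself to simple spreadsheet math. Below is a rough 20-year sketch – every dollar figure and replacement cycle is an invented placeholder, since real 5G equipment pricing isn’t yet known:

    # Rough 20-year life-cycle cost comparison, fiber drop vs. 5G loop.
    # Every number is an invented placeholder, not real pricing.
    YEARS = 20

    # Fiber: drop plus electronics built once, with modest maintenance.
    fiber_total = 700 + 10 * YEARS  # hypothetical drop/ONT + $10/yr upkeep

    # 5G: cheaper install, but pole electronics replaced every ~8 years
    # and higher maintenance for gear exposed to the weather.
    install, electronics, maint = 400, 500, 40
    replacements = YEARS // 8
    wireless_total = install + electronics * (1 + replacements) + maint * YEARS

    print(f"fiber drop over {YEARS} years:    ${fiber_total:,}")
    print(f"wireless loop over {YEARS} years: ${wireless_total:,}")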

We won’t be able to make this cost comparison until 5G electronics are widely available and after a few brave ISPs suffer through the first generation of the technology.

Spectrum. Spectrum is a huge issue. Verizon and other big ISPs are going to have access to licensed spectrum for 5G that’s not going to be available to anybody else. It’s likely that companies like Verizon will get fast speeds by bonding together multiple bands of millimeter wave spectrum while smaller providers will be limited to only unlicensed spectrum bands. The FCC is in the early stages of allocating the various bands of millimeter wave spectrum, so we don’t yet have a clear picture of the unlicensed options that will be available to smaller ISPs.

Faster speeds. Some fiber overbuilders already provide a gigabit product to all customers, and it’s likely that over time they will go even faster. Verizon is reporting speeds in its first 5G deployments of between 300 Mbps and a gigabit, and many fiber overbuilders are not going to want a network where speeds vary by local conditions and from customer to customer. Wireless speeds in the field using millimeter wave spectrum are never going to be as consistently reliable and predictable as a fiber-based technology.

Summary. It’s far too early to understand the potential for 5G wireless loops. If the various issues can be clarified, I’m sure that numerous small ISPs will consider 5G. The big unknowns for now are the cost of the electronics and the amount of spectrum that will be available to small ISPs. But even after those two things are known it’s going to be a complex decision for a network owner. I don’t foresee any mad rush by smaller fiber overbuilders to embrace 5G.

AT&T and Connected Vehicles

AT&T just released a blog talking about their connected vehicle product. The blog paints a picture of where AT&T is today and where they hope to be headed in this market niche.

For a company like AT&T, the only reason to be excited about a new market niche is the creation of a new revenue stream. AT&T claims to have had 24 million connected cars on its network as of the end of 3Q 2018, plus 3 million connected fleet vehicles. They also have over 1 million customers who buy mobile WiFi hotspots from AT&T.

What does that look like as a revenue stream? AT&T has relationships with 29 global car manufacturers. Most new cars today come with some kind of connectivity plan that’s free to a car buyer for a short time, usually 3 to 6 months. When the free trial is over consumers must subscribe in order to retain the connectivity service.

As an example of how this works, all new Buicks and Fiats come with AT&T’s UConnect Access for a 6-month free trial period. The service provides unlimited broadband to the vehicle for streaming video or for feeding the on-board mapping system. After the trial, customers must subscribe at $14.99 per month, or they can buy connectivity a la carte at $9.99 per day or $34.99 per month.

In the blog AT&T touts a relationship with Subaru. The company provides a trial subscription to Starlink, which provides on-board navigation on a screen plus safety features like the ability to call for roadside assistance or to locate a stolen vehicle. Subaru offers different plans for different vehicles, with Starlink trials ranging from 4 months to 3 years. Once the trial is over, extending Starlink costs $49 for the first year and then $99 per year for just the security package or $149 per year for the whole service. Starlink is not part of AT&T, so only some portion of this revenue goes to the carrier.

I wonder how many people extend these free trials and become paying customers. I have to think that the majority of AT&T’s connected vehicles are under the Starlink relationship, which has been around for many years. Families that drive a lot and watch a lot of video in a vehicle might find UConnect Access a much better alternative than using cellular data plans. People who want to be able to locate a stolen car might like Starlink. However, most drivers probably don’t see value in these plans – most of the features offered in these packages are available as part of everybody’s cellular data plan using the Bluetooth connectivity in these vehicles.
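To put rough numbers on the revenue question – with a take rate that is purely my guess, since AT&T doesn’t disclose how many trials convert to paying subscriptions:

    # Illustrative connected-car revenue estimate. The 24 million car count
    # and $14.99 price come from the text above; the take rate is a guess.
    connected_cars = 24_000_000
    take_rate = 0.05      # hypothetical: 5% convert to a paying plan
    monthly_fee = 14.99

    annual_revenue = connected_cars * take_rate * monthly_fee * 12
    print(f"hypothetical annual revenue: ${annual_revenue / 1e6:,.0f} million")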

The vehicle fleet business, however, is intriguing. Companies can use this connectivity to keep drivers connected to the home office and core software systems. This can also be done with cellphones, but I can think of several benefits to building this directly into the vehicle.

The second half of the blog discusses the possibilities for 5G and automated cars. That’s the future revenue stream the company is banking on, and probably one of its biggest hopes for 5G. AT&T has two hopes for 5G vehicle connectivity:

  • They hope to provide the connectivity between vehicles using 5G and the cloud. They believe that cars will be connected to the 5G network in order to ‘learn’ from the driving experiences of other vehicles in the immediate vicinity.
  • They also hope to eventually provide broadband to driverless cars where passengers will be interested in being connected while traveling.

The first application, connecting nearby vehicles, is not guaranteed. It all depends on the technology path chosen to power driverless vehicles. One school of thought says that the majority of the brains and decision-making will reside in on-board computers, and that if cars connect to nearby vehicles it will be through on-board wireless communication. AT&T is hoping for the alternate approach where that connectivity happens in the cloud – but that’s going to require a massive investment in small cell sites everywhere. If the cloud solution is not the preferred technology, then companies like AT&T will have no incentive to place 5G cell sites along the millions of miles of roads.

This is one of those chicken-and-egg situations. I liken it to smart city technology. A decade ago many predicted that cities would need mountains of fiber to support smart cities – but today most such applications are being done wirelessly, and any company banking on a fiber-based solution got left behind. At this point nobody can predict the technology that will ultimately be used by smart cars. However, since the 5G approach needs the deployment of a massive, ubiquitous cellular network, the simpler solution may well be to do it some other way.

FCC Proposes New WiFi Spectrum

At their recent open meeting the FCC announced that it is proposing to open up to 1,200 megahertz of the spectrum band between 5.925 GHz and 7.125 GHz (referred to as the 6 GHz band) as unlicensed spectrum. This is a bold proposal that would more than double the total amount of bandwidth available for WiFi.

However, the proposal comes with several caveats that will have to be considered before expecting the spectrum to be useful everywhere for rural broadband. First, the FCC proposes that anywhere the spectrum is currently being used for Broadcast Auxiliary Service or Cable TV Relay Service, it only be authorized for indoor use.

In those places where the spectrum is being used heavily for point-to-point microwave service, outdoor use would have to be coordinated with existing users through an automated frequency coordination system, or database, that would ensure no interference. I assume one of the rules that must be clarified is a definition of what constitutes ‘heavy’ existing point-to-point use of the spectrum.
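The coordination mechanism would presumably work something like the sketch below – a simplified stand-in of my own, with invented incumbent locations and an invented exclusion radius, since the FCC hasn’t yet defined the actual database rules:

    # Simplified automated-frequency-coordination check. The incumbent
    # list, coordinates and 10 km exclusion radius are invented.
    import math

    # Registered point-to-point microwave receivers: (name, lat, lon)
    incumbents = [("relay_A", 45.10, -93.20), ("relay_B", 45.40, -93.90)]

    def km_between(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance in kilometers.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    def outdoor_use_allowed(lat, lon, exclusion_km=10.0):
        return all(km_between(lat, lon, ilat, ilon) > exclusion_km
                   for _, ilat, ilon in incumbents)

    print(outdoor_use_allowed(45.11, -93.21))  # near relay_A -> False
    print(outdoor_use_allowed(44.50, -92.50))  # far from incumbents -> True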

In places where there are no existing uses of the spectrum, it sounds like it would be available for outdoor as well as indoor use.

This band of spectrum would be a great addition to networks that provide point-to-multipoint fixed wireless service. The spectrum will have a slightly smaller effective delivery area than the 5.8 GHz WiFi ISM band now widely in use. The 5.8 GHz spectrum is already the workhorse in most fixed wireless networks and adding additional spectrum would increase the bandwidth that can be delivered to a given customer in systems that can combine spectrum from various frequencies.

The key is going to be to find out what the two restrictions mean in the real world and how many places are going to have partial or total restrictions of the spectrum. Hopefully the FCC will produce maps or databases that document the areas they think are restricted using their two proposed criteria.

This spectrum would also be welcome indoors and would add more channels for home WiFi routers, making it easier to cover a home and provide coverage to greater numbers of devices simultaneously. The FCC hopes the spectrum can be used everywhere for indoor use, but they are asking the industry if that causes any problems.

Note that this is not an order, but a proposal. The FCC released a draft of the Notice of Proposed Rulemaking on October 2, and after this vote they should soon publish a schedule for a public comment period from the industry and other interested parties.

WiFi has been a gigantic boon to the economy and it’s a great move by the FCC to provide additional WiFi spectrum, even if this turns out to be largely restricted to indoor use. However, everybody associated with rural broadband is going to hope this is decided soon and that the frequency is added to the toolbox for serving fixed wireless in rural areas.

Interestingly, this spectrum would make it easier for the ISPs that claimed they could achieve universal 100 Mbps speeds with fixed wireless in the recent CAF II reverse auction. Perhaps some of those companies were counting on this spectrum as a way to meet that claim.

It’s always hard to predict the speed of the FCC process. I see that various WiFi-related organizations are hoping this means use of the spectrum as early as sometime next year. However, we’ve often seen the FCC proceed a lot slower than the industry wants, and one of the factors the FCC is going to take into consideration is the pushback from cellular companies that will likely want this to be licensed spectrum. Unfortunately, the large cellular companies seem to be getting everything on their wish list from this FCC, so we’ll have to see how that plays out.

I imagine that device manufacturers are already considering this in the design of new hardware, but still need to know more before finalizing software. This is perhaps the best announcement so far from this FCC. The benefit to the country from WiFi is gigantic and this will vastly strengthen the advantages of WiFi.