Flexible Numerology

This is the last in a series that looks at the underlying technologies that will create improvements for 5G – I looked previously at MIMO antennas and network slicing. Today I look at flexible numerology. Flexible numerology, in a nutshell, involves new techniques that allow for changing the width of data channels within a frequency band.

The easiest way to understand the issues involved is to think back to how we used wireless devices in the past. Anybody who ever fiddled with an older 802.11n WiFi router using 2.4 GHz remembers directing different devices in the home to channels 1, 6, or 11. While the 2.4 GHz band has 11 separate available channels, most wireless router manufacturers limited use to those three channels in order to avoid cross-channel interference. They knew that if a home used only these three channels it would likely not see such interference and would get the maximum performance on each channel. However, the decision to use only those three channels limited the amount of bandwidth that could be utilized. In peak usage situations only 3 of the 11 channels of 2.4 GHz are carrying bandwidth – avoiding interference meant not using much of the available frequency.
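Here's a quick back-of-the-envelope sketch of why those three channels were the magic numbers: channel centers in the 2.4 GHz band sit only 5 MHz apart, but each 802.11 channel occupies roughly 22 MHz, so adjacent channel numbers overlap heavily.

```python
# Sketch: why channels 1, 6, and 11 are the non-overlapping trio in the
# 2.4 GHz band. Channel centers are 5 MHz apart, but each 802.11 channel
# occupies roughly 22 MHz, so neighboring channel numbers interfere.

def center_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz WiFi channel (channels 1-11)."""
    return 2412 + 5 * (channel - 1)

def channels_overlap(a: int, b: int, width_mhz: int = 22) -> bool:
    """Two channels overlap if their centers are closer than one channel width."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

print(channels_overlap(1, 2))   # True - neighbors interfere badly
print(channels_overlap(1, 6))   # False - 25 MHz apart, clear of each other
print(channels_overlap(6, 11))  # False
```

Only channel pairs at least five channel numbers apart come out clean, and 1/6/11 is the only way to fit three such channels into the band.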

It’s easy to think of the channels within a wireless frequency as separate channels, because that’s how they are defined at the FCC. Cable companies are able to create distinct channels of frequency within the controlled confines of a coaxial cable in a way that limits interference between channels. But when signals are transmitted in the wild through the air, all sorts of interference arises. Anybody old enough to have watched TV in the 50s can remember times when you could see ghosts of a nearby channel while watching one of the low channel numbers.

Our cellular networks have been designed in a similar fashion to the WiFi channels. Within each of the frequencies used for cellular service are channels predefined by the FCC, with buffers between each channel. However, even with the buffers there is cross-channel interference between neighboring channels, and so the cellular carriers have selectively chosen to spread the actual use of frequency in ways similar to how we used channels 1, 6, and 11 for WiFi.

Flexible numerology is a new goal for 5G that was published with the 3GPP Release 15 standard. Flexible numerology is part of a system for allocating frequency in a new way that is intended to get the best possible use of the spectrum.

5G will use the same underlying method for modulating signals as 4G LTE – orthogonal frequency division multiplexing (OFDM). OFDM is the current best way to get the most out of a block of frequency: a data stream is split across several separate narrowband channels to reduce interference, much in the same way that we used the three channels of WiFi.
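To make the idea concrete, here's a toy sketch of the serial-to-parallel step at the heart of OFDM – one fast stream divided across many slower parallel subchannels. (Real OFDM also applies an IFFT and a cyclic prefix; this only illustrates the splitting.)

```python
# Toy illustration of the OFDM idea: one fast serial data stream is split
# into many slower parallel streams, each carried on its own narrowband
# subcarrier. Real OFDM adds an IFFT and cyclic prefix on top of this;
# the sketch only shows the serial-to-parallel mapping.

def split_across_subcarriers(bits: str, num_subcarriers: int) -> list:
    """Round-robin the serial bit stream across N parallel subchannels."""
    streams = ["" for _ in range(num_subcarriers)]
    for i, bit in enumerate(bits):
        streams[i % num_subcarriers] += bit
    return streams

serial = "110100101110"
parallel = split_across_subcarriers(serial, 4)
print(parallel)  # ['101', '101', '011', '100'] - each subcarrier runs at 1/4 rate
```

Reading the streams back in round-robin order recovers the original stream; each subcarrier only has to carry a quarter of the symbol rate, which is what makes the narrowband channels robust.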

Flexible numerology is going to give the cell site the option to create much smaller narrowband channels within the channels described in the OFDM standard. That’s the magic sauce that will enable 5G to communicate with a huge number of devices without creating massive interference.

Consider a situation with two users at a 5G cell site. One is an IoT sensor that wants to trickle small amounts of data to the network and the other is a gamer who needs bursts of huge amounts of bandwidth. In the LTE network both devices would be given a narrowband channel – the IoT device for perhaps a tiny amount of time and the gamer for longer bursts. That’s an inefficient use of frequency since the IoT device is transmitting only a tiny amount of data, yet for even the short time that the cell site communicates with that device, in an LTE network it commands as much bandwidth as any other user.

Flexible numerology will allow assigning a tiny slice of frequency to the IoT device. For example, the cell site might elect to assign 1/64th of a channel to the IoT device, meaning the remaining 63/64ths of the frequency can be assigned to some other purpose to be used at the same time that the IoT device is demanding bandwidth. In a 5G network the IoT device might grab a tiny slice of frequency for a short period of time and barely create a ripple in the overall use of frequency at the cell site.
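The "numerology" in the Release 15 standard is literally a scaling parameter, usually written μ: subcarrier spacing scales as 15 kHz × 2^μ, and slot duration shrinks in step. A simplified sketch of that table:

```python
# Sketch of the 3GPP Release 15 numerology table: subcarrier spacing
# scales as 15 kHz * 2^mu, and the slot duration shrinks correspondingly.
# Narrow spacing suits trickle IoT traffic; wider spacing gives the
# short, low-latency slots that bursty applications want.

def numerology(mu: int) -> dict:
    return {
        "mu": mu,
        "subcarrier_spacing_khz": 15 * 2 ** mu,
        "slot_duration_ms": 1.0 / 2 ** mu,  # one 1 ms subframe holds 2^mu slots
    }

for mu in range(5):  # Release 15 defines mu = 0 through 4
    n = numerology(mu)
    print(f"mu={n['mu']}  spacing={n['subcarrier_spacing_khz']:>3} kHz  "
          f"slot={n['slot_duration_ms']} ms")
```

This is the mechanism behind assigning a sliver of frequency to a sensor while a wideband user gets a fat, fast slot on the same cell site.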

The cellular network might treat the gamer the same as today but has numerous new options with flexible numerology to improve the gaming performance. It might separate sent and received data and size each path according to needs. It might create a connection for a longer time period than normal to efficiently transmit the needed packets. Essentially, flexible numerology lets the cell site treat every customer differently depending upon their specific needs.

This implementation of flexible numerology for 5G is complicated and will require new algorithms that ultimately get built into the chips for 5G devices. It’s always interesting to watch how new standards are implemented in the industry. I’ve seen numerous papers on the web over the last few months from labs and universities looking at the challenges of flexible numerology. These investigations will eventually get translated into lab trials of devices, and, if those trials are successful, make it into production for both cell sites and cellular devices. This is why a new standard like 5G can’t be implemented immediately. Standards define the problem, and then scientists, engineers and manufacturers take a shot at making the new ideas work (or sometimes find out that they don’t work). It’s likely to be years until flexible numerology works well enough to be in everyday use in cell sites – but when it does, the utilization of frequency will be significantly improved, which is a key goal for 5G.

The Millimeter Wave Auctions

The FCC will soon hold the auction for two bands of millimeter wave spectrum. The auction for the 28 GHz spectrum, referred to as Auction 101, will begin on November 14 and will offer 3,072 licenses in the 27.5 to 28.35 GHz band. The auction for 24 GHz, referred to as Auction 102, will follow at the end of Auction 101 and will offer 2,909 licenses in the 24.25 to 24.45 GHz and the 24.75 to 25.25 GHz bands.

This is the spectrum that will support 5G high-bandwidth products. The most unusual aspect of these auctions is that the FCC is offering much wider channels than ever before, making the spectrum particularly useful for broadband deployment and also for the frequency slicing needed to serve multiple customers. Auction 101 includes two blocks of 425 MHz and is being auctioned by county. Auction 102 will include seven blocks of 100 MHz and will be auctioned by Partial Economic Areas (PEAs). PEAs divide the country into 416 zones, grouped by economic interest. They vary from the gigantic PEA that encompasses all of New York City and the surrounding areas in Connecticut and New Jersey to PEAs that are almost entirely rural.
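The block math above works out exactly, which is worth a quick sanity check:

```python
# Quick arithmetic check on the auction bands described above.

# Auction 101: 27.5-28.35 GHz offered as two 425 MHz blocks.
band_101_mhz = 28350 - 27500
assert band_101_mhz == 2 * 425  # 850 MHz total

# Auction 102: 24.25-24.45 GHz plus 24.75-25.25 GHz as seven 100 MHz blocks.
band_102_mhz = (24450 - 24250) + (25250 - 24750)
assert band_102_mhz == 7 * 100  # 700 MHz total

print(band_101_mhz, band_102_mhz)  # 850 700
```

For comparison, a single one of these 100 MHz blocks is wider than the entire 2.4 GHz WiFi band.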

That means that every part of the country could see as many as seven different license holders, assuming that somebody pursues all of the spectrum. It’s likely, though, that there will be rural areas where nobody buys the spectrum. It will be interesting to look at the maps when the auctions are done.

This is the spectrum that can be used to support fixed wireless broadband like Verizon is now deploying from poles. The spectrum has the capability of delivering big bandwidth, but only for relatively short distances of 1,000 feet or less. The spectrum can also be used as a focused beam to deliver several gigabits of bandwidth for a mile to a single point, such as what Webpass is currently doing to serve downtown high-rise apartment buildings.

The industry consensus is that this spectrum will find limited use in rural areas for now since it’s hard, with existing technology, to deploy a 5G transmitter site that might only reach a few potential customers.

The FCC has released the names of the companies that will be bidding in the auctions. As expected, the big cellular companies are there, with AT&T, Verizon and T-Mobile all bidding. Absent is Sprint, but the speculation is that they are relying on the merger with T-Mobile and have elected to sit out the auction.

The big telcos are also in the auctions with AT&T, Verizon, Frontier and Windstream all participating. Absent is CenturyLink, which further strengthens the belief that they are no longer pursuing residential broadband.

The only cable company of any size in the auction is Cox Communications. The other big companies like Comcast, Charter, Altice and many others are sitting out the auction. It doesn’t make sense for a cable company to deploy the spectrum where they are already the incumbent broadband provider – wireless technology for end users would compete directly with their own networks. Since Cox is privately held it’s hard to know their plans, but one use of the spectrum would be to expand in the areas surrounding their current footprint or to move into new markets. It’s costly to expand their hybrid fiber-coaxial networks and 5G wireless might be a cheaper way to move into new markets.

There are some rural companies that are bidding for spectrum. It’s hard to know if the rural telcos and cooperatives on the list want to use the spectrum to enhance broadband in their own footprint or if they want to use the spectrum to expand into larger nearby markets. One of the most interesting companies taking part in both auctions is US Cellular. They are the fifth largest cellular company after the big four and serve mostly rural markets. They’ve already made public announcements about upgrading to the most current version of 4G LTE and it will be interesting to see how they use this spectrum.

PropTech

One of the things that I’ve always loved with our industry is that there are dozens of new acronyms to learn every year – and that’s the result of the industry always moving in new directions. The latest new acronym for me is PropTech, meaning telecom technology designed to benefit large buildings. There are now numerous companies, including well-funded start-ups, that are specializing in bringing broadband and upgrading other technology in buildings.

It’s been interesting to watch the growth of the industry over time. For many years the telecom focus for large buildings was bringing a competitive cable TV product into buildings, usually delivered by satellite.

When broadband was first introduced in the late 90s and speeds were still slow, tenants were able to get sufficient broadband from the cable or telephone incumbent. The first place we saw a demand for bigger bandwidth was in high rises housing big corporate clients. This was an area of focus for the telcos and the big CLECs that arose in the late 1990s. CLECs were measured by how many buildings they had lit with fiber – and the numbers were low, with only a handful of large buildings connected in each major city.

There were cost barriers for constructing downtown fiber – construction costs were high, gaining access to entrance facilities was a challenge and there was no easy technology for stringing fiber inside older buildings – so the number of fiber-wired buildings remained relatively small. Around 2000 we started to see newly constructed residential and business high rises come wired with fiber. But getting fiber into older buildings remained a challenge. I have numerous clients that built fiber to whole cities before 2010 but bypassed the high rises and large apartment complexes.

This started changing a decade ago as we saw new technologies aimed at more easily rewiring older buildings. Probably the most important breakthrough was flexible fiber that could easily bend around corners, allowing fiber-wiring schemes that could unobtrusively hide fiber in the corners of ceilings. Since then we’ve seen other improvements that make it easier and more affordable to serve larger buildings, such as the use of G.fast to distribute broadband over existing copper wiring.

PropTech is now taking real estate technology to the next level. Broadband is still the primary focus today, and building owners want fast broadband for tenants. But PropTech goes far beyond just broadband. Landlords now want to provide networked WiFi in common areas. Landlords want cellular boosters to provide better cellphone coverage for tenants. Building owners want to tout security and want security cameras in parking and other common areas that can be accessed by tenants. We’re seeing landlords now adding smart-home technology into upscale units. We’re also seeing buildings with business tenants constructing sophisticated data center rooms rather than the old wiring closets that used to house electronics.

Some of the new technology is designed to help landlords control their own operating expenses. This includes things like sensors and smart meters aimed at minimizing power costs. New buildings are going green, often generating much or all of their own energy needs – all supported by a robust telecom infrastructure.

Convincing landlords to spend the capital to adopt PropTech isn’t always easy. PropTech business plans stress new revenue streams from providing broadband, new revenues from increased rents and cost-savings as a way to pay for upgrades. The ultimate value to a landlord is the increased value of the property from modernizing. Some PropTech companies are even bringing the funding required to pay for the upgrades, making it easy for a landlord to say yes.

PropTech is creating some interesting changes in urban broadband. For many years the best broadband in cities was found in single family homes. But today some of the best networks and fastest data speeds are found in the high rises – where just a few years ago renters suffered from slow broadband and poor cell phone coverage.

A Better WiFi?

Regardless of the kind of ISP service you buy, almost every home network today uses WiFi for the last leg of our broadband network. Many of the broadband complaints ISPs hear about are actually problems with WiFi and not with the underlying broadband network serving the home.

Luckily the engineers that support the WiFi standards don’t sit still and are always working to improve the performance of WiFi. The latest effort was kicked off a few weeks ago when the 802.11 Extremely High Throughput Study Group of the IEEE initiated an effort to look for ways to improve peak throughput for WiFi networks.

This group will be investigating two issues. First, they want to find ways to increase peak throughput on WiFi for big data applications like video streaming, augmented reality and virtual reality. The current WiFi standard doesn’t allow for prioritization of service, and the device in your home with the lowest bandwidth requirement can claim the same priority for grabbing the WiFi signal as the most data-intensive application. This is a key feature baked into the WiFi standard that was intended to allow the WiFi network to communicate simultaneously with multiple users and devices.

The Study Group will also be looking at latency. We are now seeing applications in the home like immersive gaming that require extremely low latency, which is difficult to achieve on a WiFi network. Immersive gaming requires fast turnaround of packets to and from the gamer. The sharing nature of WiFi means that a WiFi network will interrupt a stream to a gamer when it sees demand from another device. Such interruptions are quick, but multiple short interruptions mean a big data stream stops and starts, and packets get lost and have to be resent. Changing this will be a big challenge because the pauses taken to accommodate multiple applications are the key characteristic of the sharing nature of WiFi.
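The effect of those pauses is easy to see in a toy model. Real 802.11 contention is far more complex, and the scheduling pattern below is purely hypothetical, but it shows why sharing airtime stretches a latency-sensitive stream:

```python
# Toy model of WiFi's sharing behavior: airtime is granted in short turns,
# so a gamer's stream pauses whenever another device has traffic queued.
# The fixed interruption pattern here is hypothetical - real 802.11
# contention is random - but the stretching effect is the same.

def simulate_airtime(gamer_packets: int, iot_interval: int) -> list:
    """Serve the gamer back-to-back, but yield one slot to an IoT sensor
    every `iot_interval` slots (0 means the gamer has the air to itself)."""
    timeline = []
    sent = 0
    slot = 0
    while sent < gamer_packets:
        slot += 1
        if iot_interval and slot % iot_interval == 0:
            timeline.append("iot")      # the gamer's stream pauses here
        else:
            timeline.append("gamer")
            sent += 1
    return timeline

shared = simulate_airtime(gamer_packets=6, iot_interval=3)
alone = simulate_airtime(gamer_packets=6, iot_interval=0)
print(shared)
print(f"slots needed: {len(shared)} shared vs {len(alone)} alone")
```

Every interruption pushes the gamer's remaining packets later in time, which is exactly the latency the Study Group wants to attack.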

This Study Group effort is a perfect example of how standards change over time. They are trying to accommodate new requirements into an existing technology. We’ve never had applications in the home environment that require the combination of dedicated bandwidth and extremely low latency. In a business environment any application of this nature would typically be hard-wired into a network and not use WiFi. However, businesses now also want mobile performance for applications like augmented reality that must be supported wirelessly.

The Study Group is taking the first step, which is to define the problem to be solved. That means looking in detail at how WiFi networks operate when asked to handle big data applications in a busy environment. This deep look will let the engineers more specifically define the exact ways that WiFi interferes with ideal network performance. The Study Group might suggest specific solutions to fix the identified problems, but it’s possible they won’t have any.

The end result of the work from the Study Group is a detailed description of the problem. In this case they will identify the specific aspects of the current WiFi specifications that are interfering with the desired performance. The Group will also specifically define the hoped-for results that would come with a change in the WiFi standard. This kind of document gives the whole industry a roadmap and a set of specific goals, and interested labs at universities and manufacturers around the world will tackle the problem defined by the Study Group.

Most people in the industry probably view standards as a finished product – a specific, immutable description of how a technology works. However, almost the exact opposite is true: standards are instead a list of performance goals. As engineers and scientists find ways to satisfy those goals, the standards are amended to include the new solutions. This is done publicly so that all of the devices using the protocol are compatible.

I just had this same discussion a few days ago concerning the 5G standards. At this early stage of 5G development what’s been agreed upon is the overall goals for the new wireless protocol. As various breakthroughs are achieved to meet those goals the standards will be updated and amended. The first set of goals for 5G are a high-level wish list of hoped-for performance. Over the next decade the 5G standard will be modified numerous times as technical solutions are found to help achieve those performance goals. It’s possible that some of the goals will never be met while others will be surpassed, but at any given time the 5G ‘standard’ will be a huge set of documents that define the current agreed-upon ways that must be followed by anybody making 5G gear.

This Work Group has their work cut out for them, because the issues that are interfering with large dedicated data connections or that are introducing latency into WiFi are core components of the original WiFi specification. When the choice was made to allow WiFi to share bandwidth among all users, it became difficult, and maybe impossible, to treat some packets better than the rest. I’m glad to know that there are engineers who are always working ahead of the market looking to solve such problems.

False Advertising for 5G

As has been expected, the wireless carriers are now actively marketing 5G cellular even though there are no actual 5G deployments. The marketing folks are always far in front of the engineers and are proclaiming 5G today much in the same way that they proclaimed 4G long before it was available.

The perfect case in point is AT&T. The company announced the launch of what they are calling 5G Evolution in 239 markets. They are also claiming they will be launching what they are calling standards-based 5G in at least 19 cities in early 2019.

The 5G Evolution product doesn’t contain any part of the new 5G standards. Instead, 5G Evolution is AT&T’s deployment of 4G LTE-Advanced technology, which can be characterized as their first fully-compliant 4G product. This is a significant upgrade that they should be proud of, but I guess their marketing folks would rather call this an evolutionary step towards 5G rather than admit that they are finally bringing mature 4G to the market – a claim they’ve already been making for many years.

What I find most annoying about AT&T’s announcement is the claim that 5G Evolution will “enable peak theoretical wireless speeds for capable devices of at least 400 megabits per second”, although their footnote goes on to say that “actual speeds are lower and will vary”. The 4G standard has been theoretically capable of speeds of at least 300 Mbps in a lab setting since the standard was first announced – but that theoretical speed has no relevance to today’s 4G network that generally delivers an average 4G speed of less than 15 Mbps.

This is like having a fiber-to-the-home provider advertise that their product is capable of speeds of 159 terabits per second, although actual speeds might be something less (that’s the current fastest speed achieved on fiber by scientists at the NICT Network System Research Institute in Japan). The intent of the statement on the AT&T website is clearly aimed at making people think they will soon be getting blazingly fast cellular data – which is not true. This is the kind of false advertising that overstates the case for 5G (and in this case for 4G) and that is confusing the public, politicians and regulators. You can’t really blame policy-makers for thinking that wireless will soon be the only technology we will need when the biggest wireless provider shamelessly claims speeds far in excess of what they will ever be deploying.

AT&T’s second claim of launching standards-based mobile 5G in 19 markets is a little closer to the truth, but is still not 5G cellular. That service is going to deploy millimeter spectrum hotspots (a technology that is being referred to as Mi-Fi) in selected locations in 19 cities including Las Vegas, Los Angeles, Nashville, Orlando, etc.

These will be true hotspots, similar to what we see in Starbucks, meaning that users will have to be in the immediate vicinity of a hotspot to get the faster service. Millimeter wave hotspots have an even shorter propagation distance than normal WiFi hotspots and the signal will travel for a few hundred feet, at best. The hotspot data won’t roam and will only work for a user while they stay in range of a given hot spot.

AT&T hasn’t said where this will be deployed, but I have to imagine it will be in places like big business hotels, convention centers and indoor sports arenas. The deployment serves several purposes for AT&T. In those busy locations it will provide an alternate source of broadband for AT&T customers who have a phone capable of receiving the Mi-Fi signal. This will relieve the pressure on normal cellular data locally, while also providing a wow factor for AT&T customers that get the faster broadband.

However, again, AT&T’s advertising is deceptive. Their press releases make it sound like the general public in these cities will soon have much faster cellular data, and they will not. Those with the right phone that find themselves in one of the selected venues will see the faster speeds, but this technology will not be deployed to the wider market in these cities. Millimeter wave hotspots are an indoor technology and not of much practical use outside. The travel distances are so short that a millimeter wave hot spot loses a significant percentage of its strength in the short distance from a pole to the ground.

I can’t really blame the marketing folks at AT&T for touting imaginary 5G. It’s what’s hot in the marketplace today and what the public has been primed to expect. But just like the false hype when 4G was first introduced, cellular customers are not on the verge of seeing blazingly fast cellphone service in the places they live and work. This advertising seems to be intended to boost the AT&T brand, but it also might be defensive since other cellular carriers are making similar claims.

Unfortunately, this kind of false advertising plants the notion for politicians and policy-makers that cellular broadband will soon be all we will need. That’s an interesting corporate tactic for AT&T, which is also building more fiber-to-the-premises right now than anybody else. These false claims seem to be most strongly competing with their own fiber broadband. But as I’ve always said, AT&T wears many hats and I imagine that their own fiber folks are as annoyed by this false advertising as the rest of us in the industry.

The 5G Summit

There was recently a 5G Summit held at the White House to discuss how the administration could encourage the private sector to deploy 5G as quickly as possible. The purpose of the summit was summarized well by Larry Kudlow, the director of the National Economic Council, who said the administration’s approach to the issue is “American first, 5G first”.

Kudlow went on to say that the administration wants to give the wireless industry whatever they need to deploy 5G quickly. The FCC recently took a big step in that direction by speeding up and cutting the costs for attaching 5G small cell sites to poles and other infrastructure in the right-of-way.

There were a few other ways mentioned in which the administration could foster 5G deployment. David Redl, the head of the NTIA, called for the government to make the needed spectrum available for 5G. The FCC is in the process of holding auctions for spectrum in the 24 GHz and 28 GHz bands. The FCC is also working towards finalizing rules for the 3.5 GHz and 3.7 GHz spectrum (the 3.5 GHz CBRS band will be the subject of tomorrow’s blog).

I hope that the fervor to promote 5G doesn’t result in giving all of the new spectrum to the big wireless carriers. One of the best things the FCC ever did was to set aside some blocks of spectrum for public use. This fueled the WiFi technology sector and most homes now have WiFi networks. The spectrum also powers the fixed wireless technology that is bringing better broadband to rural America. While 5G is important, the administration and the FCC need to set aside more public spectrum to allow for innovation and broadband deployment outside of the big ISP sector.

I found this summit to be intriguing because it’s the first time I recall the government so heavily touting a telecom technology before it was introduced into the marketplace. There was mention in the Summit that the US is in a race with China to deploy 5G, but I’ve never seen anybody explain how that might give China an advantage over the US. China is far behind the US in terms of landline broadband and it makes sense for them (and much of the rest of the world) to stress wireless technologies.

There certainly was no similar hoopla when Verizon first announced the widespread deployment of fiber – an important milestone in the industry. In fact, at the time the press and Wall Street said that Verizon was making a mistake. It’s interesting to see that Verizon is again the market leader and is the only company, perhaps aside from T-Mobile, that has announced any plans to deploy 5G broadband. It’s worth looking back in history to remember that no other big ISPs followed Verizon’s lead and for over a decade the only other fiber to residences was built by small telcos, municipalities and small overbuilders.

Even if the government makes it as easy as possible to deploy 5G, will other big ISPs follow Verizon into the business? For now, AT&T has clearly decided to pass on the technology and is instead investing in fiber to homes and businesses. The big cable companies have shown no interest in the technology. The cellular companies will upgrade mobile networks to 5G but that’s expected to happen incrementally over a decade and won’t be a transformational technology upgrade. 4G LTE is still expected to be the wireless workhorse for many years to come.

There was one negative issue mentioned at the Summit by Rep. Greg Walden of Oregon. While praising efforts to deploy 5G he also said that we needed to take steps to protect the supply chain for 5G. Currently the FCC has precluded the use of any federal funds to buy technology manufactured by Huawei. But a more pressing issue is the current tariffs on China that are inflating the cost of 5G electronics – something that will be a barrier to deployment if they remain in place for very long.

It’s likely that the Summit was nothing more than politicians climbing onto a popular bandwagon. There has been enough hype about 5G that much of the public views it as a cutting-edge technology that will somehow transform broadband. We’re going to have to watch the Verizon deployment for a while, though, to see if that is true.

The administration has it within their power to create more benefits for companies willing to invest in 5G. However, helping huge companies like Verizon, which doesn’t need the help, is not likely going to bring 5G to more homes. And federal money won’t transform 5G into a technology that can benefit rural America, since 5G requires a robust fiber network. I just hope this doesn’t signal more giveaways to the giant ISPs – but if the FCC’s small cell order is any indicator, that might be all it means.

Network Slicing

Almost every PowerPoint I’ve seen about 5G cellular networks talks about network slicing. This is a new networking term unique to 5G. This is the second article looking at new features of 5G, with the first being a blog on Massive MIMO.

Cellular networks are now expected to make multiple simultaneous connections with different characteristics. The examples used in most presentations explain how a cellular network should be able to serve traditional cellular voice, more robust cellular data, IoT monitoring and connections to self-driving cars. Each of these applications requires connections with different bandwidth, latency, security, etc. The cellular network will be expected to immediately recognize the required need and respond appropriately.

It’s a challenge because of the diverse nature of each kind of network demand. For example, an IoT network will be comprised of huge numbers of devices, mostly in fixed locations and requiring small bandwidth. Contrast this with cellular data, where as we increase data speeds we’ll expect the network to deliver large amounts of bursty bandwidth to mobile devices by combining multiple channels of frequency and even signals from multiple cell sites. Self-driving cars or gaming will demand large and steady bandwidth with extremely low latency. These examples are some of the primary uses for a future cell site, but there are dozens of other kinds of connections that will be needed.

The ability to design a quick response to diverse network needs is made more difficult by the fact that every market for every cellular carrier uses a different combination of spectrum blocks and different channels within the blocks. This makes it impossible to design a ‘standard’ network strategy that will work everywhere. To be effective a cellular network must combine the spectrum components available in a given network to create a homogeneous network.

Landline networks are able to handle diverse types of demands using a combination of quality of service (QoS) and techniques like virtual private networks (VPNs). QoS uses a feature called differentiated services to classify and manage different types of IP traffic like streaming video, VoIP, web surfing, etc. Many networks also then use VPN functions like IP tunneling to isolate data paths aimed at specific customers.

These same techniques are hard to apply on a cellular network. Cellular systems need a sophisticated networking solution because the network is limited at any given time by the number of channels of frequency that are not being used. We don’t worry about this on landline networks because we can flood the network with enough bandwidth to accommodate every request. To most effectively use the available bandwidth a cellular network must quickly recognize the exact nature of the bandwidth being demanded and then cobble together the most efficient use of available spectrum. Current QoS solutions can’t adequately distinguish between different types of traffic to the degree needed to make this determination.
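To see how coarse the landline approach is, here's a sketch of DiffServ-style classification using a few well-known DSCP codepoints. The labels in parentheses are illustrative examples of traffic each class typically carries:

```python
# Sketch of landline-style QoS classification: a few well-known DiffServ
# codepoints (DSCP) mapped to traffic classes. The point is that this
# classification is coarse - a handful of classes applied per packet,
# nothing like the per-connection tailoring a cellular network needs.

DSCP_CLASSES = {
    46: "EF - expedited forwarding (e.g. VoIP)",
    34: "AF41 - assured forwarding (e.g. interactive video)",
    8:  "CS1 - low-priority bulk traffic",
    0:  "BE - best effort (default)",
}

def classify(dscp: int) -> str:
    """Map a packet's DSCP codepoint to a traffic class, defaulting to best effort."""
    return DSCP_CLASSES.get(dscp, "BE - best effort (default)")

print(classify(46))
print(classify(7))   # unknown codepoints fall back to best effort
```

A scheme like this tells a router which queue a packet belongs in, but it says nothing about how much spectrum a connection should get or for how long, which is the determination a cell site has to make.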

Network slicing provides a new way to partition the spectrum on a network. In layman’s terms, it performs several functions that differ from QoS: it can quickly determine the nature of a bandwidth demand and then create a wide range of network responses.

One of the features of network slicing is that the network can be pre-configured for different uses. For example, a portion of the network can be isolated and assigned to a single function like IoT. Even more important, new revenues can be generated by partitioning and isolating a part of the network for a single customer – a business within range of a small cell site can be sold a share of the capacity of the cell site to guarantee better service. Slicing could also segregate traffic better – for instance, a cellular carrier could isolate traffic from one of its MVNO partners from other traffic on the cell site.

Network slicing can also subdivide spectrum. It allows the cell site to use a portion of a channel for a given connection rather than the whole channel. Slicing off small amounts of spectrum for small bandwidth needs is far more efficient than how cell sites operate today.
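The concrete mechanism the 5G New Radio standard provides for this subdivision is flexible numerology: subcarrier spacing scales as 15 kHz × 2^μ (3GPP TS 38.211), so the same block of spectrum can be carved into many narrow subcarriers for small IoT demands or fewer, wider ones for low-latency traffic. A quick sketch of the scaling:

```python
# 5G NR flexible numerology (3GPP TS 38.211): subcarrier spacing doubles
# with each step of the numerology index mu, and slot duration halves,
# letting the network trade channel width against latency.

def subcarrier_spacing_khz(mu: int) -> int:
    return 15 * 2 ** mu

def slot_duration_ms(mu: int) -> float:
    return 1.0 / 2 ** mu

for mu in range(5):
    print(mu, subcarrier_spacing_khz(mu), slot_duration_ms(mu))
# mu=0: 15 kHz subcarriers, 1.0 ms slots ... mu=4: 240 kHz, 0.0625 ms
```

Narrow subcarriers with long slots suit low-power sensors; wide subcarriers with short slots suit traffic that cannot tolerate latency.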

Finally, network slicing introduces a lot of new data features not available with QoS. The network can customize the way it handles any particular data stream in terms of data priority, encryption, data storage, etc. The network can more easily give priority to things like law enforcement connections, or IoT signals from critical devices.

Massive MIMO

One of the technologies that will bolster 5G cellular is the use of massive MIMO (multiple-input, multiple-output) antenna arrays. Massive MIMO is an extension of the smaller MIMO antennas that have been in use for several years. For example, home WiFi routers now routinely use multiple antennas to allow for easier connections to multiple devices. Basic forms of MIMO technology have been deployed in LTE cell sites for several years.

Massive MIMO differs from current technology in its use of big arrays of antennas. For example, Sprint, along with Nokia, demonstrated a massive MIMO transmitter in 2017 that used 128 antennas – 64 for receive and 64 for transmit. Sprint is in the process of deploying a much smaller array in cell sites using the 2.5 GHz spectrum.

Massive MIMO can be used in two different ways. First, multiple transmitter antennas can be focused together to reach a single customer (who also needs multiple receivers) to increase throughput. In the trial mentioned above, Sprint and Nokia were able to achieve a 300 Mbps connection to a beefed-up cellphone. That’s a lot more bandwidth than can be achieved from one transmitter, which at most can deliver whatever bandwidth is possible on the single channel of spectrum being used.

The extra bandwidth is achieved in two ways. First, using multiple transmitters means that multiple data streams can be sent simultaneously on the same frequency to the same receiving device. Both the transmitter and receiver must have sophisticated processing power to coordinate and combine the multiple signals.

The bandwidth is also boosted by what’s called precoding or beamforming. This technology coordinates the signals from multiple transmitters to maximize the received signal gain and to reduce what is called the multipath fading effect. In simple terms, the beamforming technology sets the power level and gain for each separate antenna to maximize the data throughput. Every frequency and channel behaves a little differently, and beamforming favors those with the best operating characteristics in a given environment. Beamforming also allows the cellular signal to be concentrated in a portion of the receiving area – to create a ‘beam’. This is not the same kind of highly concentrated beam used in microwave transmitters, but concentrating the radio signals into the general area of the customer means a more efficient delivery of data packets.
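The phase alignment at the heart of beamforming can be shown with a toy example. This sketch assumes a single receive antenna, perfect channel knowledge, and maximum-ratio transmission (one simple form of precoding); real systems are far more involved:

```python
import cmath
import math

def mrt_gain(channel):
    """Received power gain from maximum-ratio transmission, a simple form
    of transmit beamforming: each antenna's signal is pre-weighted by the
    conjugate of its channel coefficient so all paths add in phase."""
    norm = math.sqrt(sum(abs(h) ** 2 for h in channel))
    weights = [h.conjugate() / norm for h in channel]       # unit total power
    combined = sum(w * h for w, h in zip(weights, channel)) # paths now aligned
    return abs(combined) ** 2  # receive power per unit transmit power

# Four antennas with equal path strength but arbitrary phases:
channel = [cmath.exp(1j * p) for p in (0.3, 1.7, 2.9, 5.1)]
print(round(10 * math.log10(mrt_gain(channel)), 2))  # 6.02 dB = 10*log10(4)
```

With the phases aligned, four antennas deliver four times the received power of one – the 10·log10(N) array gain that makes large arrays worthwhile.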

The cellular companies, though, are focused on the second use of MIMO – the ability to connect to more devices simultaneously. One of the key parameters of the 5G cellular specifications is the ability of a cell site to make up to 100,000 simultaneous connections. The carriers envision 5G as the platform for the Internet of Things and want to use cellular bandwidth to connect to the many sensors envisioned in our near-future world. This first generation of massive MIMO won’t bump cell sites to 100,000 connections, but it’s a first step toward increasing the number of connections.

Massive MIMO is also going to facilitate the coordination of signals from multiple cell sites. Today’s cellular networks are based upon a roaming architecture. That means that a cellphone or any other device that wants a cellular connection will grab the strongest available cellular signal. That’s normally the closest cell site but could be a more distant one if the nearest site is busy. With roaming a cellular connection is handed from one cell site to the next for a customer that is moving through cellular coverage areas.

One of the key aspects of 5G is that it will allow multiple cell sites to connect to a single customer when necessary. That might mean combining the signal from a MIMO antenna in two neighboring cell sites. In most places this is not particularly useful today since cell sites tend to be fairly far apart, but as we migrate to smaller cells the chances of a customer being in range of multiple cell sites increase. Combining cell sites could be useful when a customer wants a big burst of data, and coordinating the MIMO signals between neighboring cell sites can temporarily give a customer the extra needed bandwidth. That kind of coordination will require sophisticated operating systems at cell sites and is certainly an area that the cellular manufacturers are now working on in their labs.

The Continued Growth of Data Traffic

Every one of my clients continues to see explosive growth of data traffic on their broadband networks. For several years I’ve been citing a statistic used for many years by Cisco that says that household use of data has doubled every three years since 1980. In Cisco’s last Visual Networking Index published in 2017 the company predicted a slight slowdown in data growth to now double about every 3.5 years.

I searched the web for other predictions of data growth and found a report published by Seagate, also in 2017, titled Data Age 2025: The Evolution of Data to Life-Critical. This report was authored for Seagate by the consulting firm IDC.

The IDC report predicts that annual worldwide web data will grow from the 16 zettabytes of data used in 2016 to 163 zettabytes in 2025 – a tenfold increase in nine years. A zettabyte is a mind-numbingly large number that equals a trillion gigabytes. That increase works out to a compound annual growth rate of about 29%, which means web traffic more than doubles every three years.
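IDC's arithmetic is easy to verify from those two endpoints:

```python
import math

# Checking the forecast arithmetic: 16 ZB in 2016 growing to 163 ZB in 2025.
start_zb, end_zb, years = 16.0, 163.0, 9

cagr = (end_zb / start_zb) ** (1 / years) - 1      # compound annual growth rate
doubling_years = math.log(2) / math.log(1 + cagr)  # time to double at that rate

print(f"{cagr:.1%}")            # 29.4%
print(f"{doubling_years:.2f}")  # 2.69 -> traffic more than doubles every 3 years
```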

The most recent burst of overall data growth has come from the migration of video online. IDC expects online video to keep growing rapidly, but also foresees a number of other web uses that are going to increase data traffic by 2025. These include:

  • The continued evolution of data from business background to “life-critical”. IDC predicts that as much as 20% of all future data will become life-critical, meaning it will directly impact our daily lives, with nearly half of that data being hypercritical. As an example, they mention the example of how a computer crash today might cause us to lose a spreadsheet, but that data used to communicate with a self-driving car must be delivered accurately. They believe that the software needed to ensure such accuracy will vastly increase the volume of traffic on the web.
  • The proliferation of embedded systems and the IoT. Today most IoT devices generate tiny amounts of data. The big growth in IoT data will not come directly from the IoT devices and sensors in the world, but from the background systems that interpret this data and make it instantly usable.
  • The increasing use of mobile and real-time data. Again, using the self-driving car as an example, IDC predicts that more than 25% of data will be required in real-time, and the systems necessary to deliver real-time data will explode usage on networks.
  • Data usage from cognitive computing and artificial intelligence systems. IDC predicts that data generated by cognitive systems – machine learning, natural language processing and artificial intelligence – will generate more than 5 zettabytes by 2025.
  • Security systems. As we have more critical data being transmitted, the security systems needed to protect the data will generate big volumes of additional web traffic.

Interestingly, this predicted growth all comes from machine-to-machine communications that are a result of us moving more daily functions onto the web. Computers will be working in the background exchanging and interpreting data to support activities such as traveling in a self-driving car or chatting with somebody in another country using a real-time interpreter. We are already seeing the beginning stages of numerous technologies that will require big real time data.

Data growth of this magnitude is going to require our data networks to grow in capacity. I don’t know of any client network that is ready to handle a ten-fold increase in data traffic, and carriers will have to beef up backbone networks significantly over time. I have often seen clients invest in new backbone electronics they hoped would be good for a decade, only to find the upgraded networks swamped within a few years. It’s hard for network engineers and CEOs to fully grasp the impact of continued rapid data growth on our networks, and it’s more common than not to underestimate future traffic growth.

This kind of data growth will also increase the pressure for faster end-user data speeds and more robust last-mile networks. If a rural 10 Mbps DSL line feels slow today, imagine how slow it will feel when urban connections are far faster than today. If the trends IDC foresees hold true, by 2025 there will be many homes needing and using gigabit connections. It’s common, even in the industry, to scoff at the usefulness of residential gigabit connections, but when our data needs keep doubling it’s inevitable that we will need gigabit speeds and beyond.

Optical Loss on Fiber

One issue that isn’t much understood except by engineers and fiber technicians is optical loss on fiber. While fiber is an incredibly efficient medium for transmitting signals, there are still factors that cause the signal to degrade. In new fiber routes these factors are usually minor, but over time problems with fiber accumulate. We’re now seeing some of the long-haul fibers from the 1980s go bad due to accumulated optical signal losses.

Optical signal loss is described as attenuation. Attenuation is a reduction in the power and clarity of a light signal that diminishes the ability of the optical receiver to interpret the data being received. Any factor that degrades the optical signal is said to increase the attenuation.

Engineers describe several kinds of phenomena that can degrade a fiber signal:

  • Chromatic Dispersion. This is the phenomenon where a signal gets distorted over distance because different frequencies of light travel at different speeds. Lasers don’t generally create only one light frequency, but a range of slightly different colors, and different colors of light travel through the fiber at slightly different speeds. This is one of the primary factors limiting the distance a fiber signal can be sent without passing through a repeater to restart and synchronize all of the separate light paths. More expensive lasers can generate purer light signals and can transmit farther. These better lasers are used on long-haul fiber routes that might go 60 miles between repeaters, while FTTH networks aren’t recommended to span more than 10 miles.
  • Modal Dispersion. Some fibers are designed to have slightly different paths for the light signal and are called multimode fibers. A fiber system can transmit different data paths through the separate modes. A good analogy is to think of the modes as separate tubes inside a conduit. But these are not physically separated paths – the modes are created by making different parts of the fiber strand out of slightly different glass material. Modal dispersion comes from the light traveling at slightly different speeds through the different modes.
  • Insertion Loss. This is loss of signal that happens when the light signal moves from one medium to another. Insertion loss occurs at splice points, where fiber passes through a connector, or when the signal is regenerated through a repeater or other device sitting in the fiber path.
  • Return Loss. This is the loss of signal due to interference caused when some of the light is reflected backwards in the fiber. While the glass used in fiber is clear, it’s never perfect, and some photons are reflected backwards and interfere with oncoming light signals.
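The chromatic dispersion effect in the first bullet can be put into rough numbers: pulse spreading grows roughly as D × L × Δλ. The dispersion coefficient below is typical of standard single-mode fiber, but the laser spectral width and the tolerable spread are illustrative assumptions, not design rules:

```python
# A rough sketch of why chromatic dispersion limits reach. The pulse
# spreads by about D * L * delta_lambda over a span; once the spread
# approaches the bit period, adjacent bits start to smear together.
# The laser width and tolerance below are assumed for illustration.

D = 17.0            # ps/(nm*km), typical standard single-mode fiber at 1550 nm
delta_lambda = 0.1  # nm, assumed laser spectral width
bit_period = 100.0  # ps, one bit at 10 Gbps

# Allow spreading of up to half a bit period before the receiver struggles:
max_spread_ps = bit_period / 2
reach_km = max_spread_ps / (D * delta_lambda)
print(round(reach_km, 1))  # ~29.4 km under these assumed numbers
```

A purer laser (smaller Δλ) stretches the reach proportionally, which is why the more expensive lasers mentioned above can go so much farther between repeaters.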

Fiber signal loss can be measured with test equipment that compares the strength of the received light signal to the power launched into the fiber. The losses are expressed in decibels (dB). New fiber networks are designed with a low total dB loss so that there is headroom over time to accommodate natural damage and degradation. Engineers are able to calculate the amount of loss that can be expected for a signal traveling through a fiber network – called a loss budget. For example, they know that a fiber signal will degrade some specific amount, say 1 dB, just from passing through a certain type of fiber. They might expect a loss of 0.3 dB for each splice along a fiber and 0.75 dB when a fiber passes through a connector.
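A loss budget of this kind is just careful addition. The sketch below uses the per-splice and per-connector numbers from the text; the per-kilometer fiber loss and the 28 dB optics budget are assumed values for illustration:

```python
# A loss-budget sketch. The splice and connector figures come from the
# text; the per-km fiber attenuation and the 28 dB budget the optics can
# tolerate are assumptions for the example.

FIBER_DB_PER_KM = 0.35  # assumed attenuation at 1310 nm; ~0.25 at 1550 nm
SPLICE_DB = 0.3
CONNECTOR_DB = 0.75

def loss_budget(km, splices, connectors):
    """Total expected attenuation for a fiber path, in dB."""
    return km * FIBER_DB_PER_KM + splices * SPLICE_DB + connectors * CONNECTOR_DB

# A hypothetical 20 km route with 6 splices and 2 connectors:
loss = loss_budget(20, 6, 2)
print(round(loss, 1))  # 10.3 dB
# If the optics tolerate, say, 28 dB end-to-end, roughly 17.7 dB of
# headroom remains for aging, temperature effects and repair splices.
```

Every repair splice added over the life of the route eats another 0.3 dB of that headroom, which is why degradation accumulates the way the next paragraphs describe.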

The biggest signal losses on fiber generally come at the end of a fiber path at the customer premise. Flaws like bends or crimps in the fiber might increase return loss. Going through multiple splices increases the insertion loss. Good installation practices are by far the most important factor in minimizing attenuation and providing for a longer life for a given fiber path.

Network engineers also understand that fibers degrade over time. Fibers might get cut and have to be re-spliced. Connectors get loose and don’t make perfect light connections. Fiber can expand and shrink with temperature extremes and create more reflection. Tiny manufacturing flaws like microscopic cracks grow over time, creating opacity and dispersing the light signal.

This is not all bad news: modern fiber electronics tolerate a fairly high level of dB loss before the fiber loses functionality. A fiber installed properly, using quality connectors and good splices, can last a long time.