Why 5G Won’t Be Here Tomorrow

I just saw another article yesterday written by a major-city newspaper telling the public that 5G is coming in 2020. I hate to see reporters who have accepted the nonsense being peddled by the carriers without digging a little deeper to find the truth. At some point in the near future, the public will finally realize that the 5G talk has mostly been hype.

I don’t mean to always sound like a 5G critic, because over time 5G will vastly improve the cellular experience. However, many of the improvements being suggested by the cellular companies – like gigabit cellular service – may never happen. More immediately, there won’t be any major improvements to cellular networks from 5G for at least 3 – 5 years. The carriers have the country and politicians fully convinced that 5G is right around the corner – but it’s not.

There was a recent article written by Sue Marek in FierceWireless that is a great example of why 5G is not going to be here tomorrow. In the piece, titled Network Slicing is a Security Nightmare for Operators, Marek explains how complicated it’s going to be to implement network slicing – perhaps the most important new aspect of 5G cellular service.

Network slicing is the ability of the cellular network to size the transmission path to exactly meet a customer’s bandwidth needs. Network slicing is one of the ways that will enable a cell site to communicate with many more customers at the same time. Today, every customer gets the same-sized data channel, meaning a lot of bandwidth is wasted when customers use less than a full channel.
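A rough way to picture the difference the article describes is to compare fixed-size channels against right-sized slices. The sketch below is purely illustrative – the cell capacity, channel size, and customer demands are all made-up numbers, not anything from the 5G specification:

```python
# Toy illustration of why right-sized "slices" serve more users than
# fixed-size channels. Every number here is invented for illustration only.

CELL_CAPACITY_MBPS = 1000          # pretend total capacity of one cell site
FIXED_CHANNEL_MBPS = 50            # 4G-style: every user gets the same channel

# Pretend demands: a mix of sensors, phones, and video streamers (Mbps)
demands = [0.1, 0.1, 2, 2, 5, 5, 25, 25, 50]

# Fixed-channel allocation: every customer consumes a full channel
fixed_users = CELL_CAPACITY_MBPS // FIXED_CHANNEL_MBPS
print(f"Fixed channels support {fixed_users} users regardless of demand")

# Slicing-style allocation: each customer gets only what they asked for
used = 0
sliced_users = 0
for d in demands * 100:            # cycle through the demand mix
    if used + d > CELL_CAPACITY_MBPS:
        break
    used += d
    sliced_users += 1
print(f"Right-sized slices support {sliced_users} users from the same capacity")
```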

Marek points out the difficult technical challenge for providing security for every slice of bandwidth. She says that getting this right is going to take two to three years. Until network slicing is viable there really is nothing that can be called 5G. The important takeaway from her article is how difficult it is to implement new technology. 5G is a drastic change from 4G in many ways. There are thirteen major changes in the 5G specification compared to 4G and implementing each of them will be a technical challenge.

What is annoying about the 5G marketing hype is that we’ve always known it would take up to a decade to fully implement 5G, just as it did to implement 4G. The cellular companies can’t seem to stop themselves from overhyping new technology, but the 5G hype is many times worse than the 4G hype a decade ago. This mostly seems due to the fact that the cellular carriers decided to use the 5G hype as a way to cram through regulatory changes they’ve wanted for a long time. That forced them to really crank up the 5G rhetoric.

5G will take the same path used by all other electronic technologies – there is a tried-and-true method of introducing upgrades. New breakthroughs start in a lab. They then go to a ‘breadboard’ process where working models are developed. Once the breadboards have been thoroughly tested they go into prototype chips, which are then retested to make sure the performance made it through the conversion to silicon. Finally, the chip design is approved and the new breakthrough goes into production. At the very fastest this process might be done in 12 – 18 months, although it can take as long as three years. Breaking in new changes in the cellular world is doubly complicated because these same changes also have to be introduced into cellphone handsets.

The likely progression we’ll see for 5G is that each year some new aspect of the 5G specification will make it into chipsets. As that happens, only the newest phones will be able to use the upgrades, while earlier versions of 5G phones won’t recognize the new breakthroughs. The idea that the handset manufacturers are introducing 5G handsets in 2020 is laughable because practically none of the important 5G upgrades are yet in chip production. Those handsets will be 5G in name only (and still priced ridiculously high).

Marek is pointing out the complexity of getting 5G security right. There are dozens of other equally difficult technical challenges that must be solved to fully realize 5G, and there are scientists in labs working on all of them. The labs will plow through all of this over time, and long after the hype is far in the past, we’ll get 5G phones that implement most of the 5G specification. It’s worth noting that there may never be a phone that meets the entire specification – the specifications for a new technology are a wish list, and some parts of it may never practically work in the field.

Talk to Me – Voice Computing

Technologists predict that one of the most consequential changes in our daily lives will soon come from being able to converse with computers. We are starting to see the early stages of this today as many of us now have personal assistants in our homes such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana or the Google Assistant. In the foreseeable future, we’ll be able to talk to computers in the same way we talk to each other, and that will usher in perhaps the most important change ever in the way that humans interact with technology.

In the book Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think the author James Vlahos looks at the history of voice computing and also predicts how voice computing will change our lives in the future. This is a well-written book that explains the underlying technologies in an understandable way. I found this to be a great introduction to the technology behind computer speech, an area I knew little about.

One of the first things made clear in the book is the difficulty of the technical challenge of conversing with computers. There are four distinct technologies involved in conversing with a computer. First is Automatic Speech Recognition (ASR), where human speech is converted into digitized ‘words’. Natural Language Understanding (NLU) is the process used by a computer to interpret the meaning of the digitized words. Natural Language Generation (NLG) is how the computer formulates the way it will respond to a human request. Finally, Speech Synthesis is how the computer converts its answer into audible words.
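Those four stages chain together into a pipeline. Here is a minimal sketch with stub functions standing in for what are, in reality, enormous machine-learning systems – none of the function names or return values come from any real product:

```python
# Skeleton of the four-stage voice pipeline described above.
# Every stage is a stub; real systems use large ML models for each step.

def automatic_speech_recognition(audio: bytes) -> str:
    """ASR: convert raw audio into digitized words (stubbed)."""
    return "what is the weather tomorrow"

def natural_language_understanding(text: str) -> dict:
    """NLU: interpret the meaning of the digitized words (stubbed)."""
    return {"intent": "get_forecast", "when": "tomorrow"}

def natural_language_generation(meaning: dict) -> str:
    """NLG: formulate a textual response to the request (stubbed)."""
    return "Tomorrow looks sunny with a high of 72."

def speech_synthesis(text: str) -> bytes:
    """Speech synthesis: convert the response into audio (stubbed)."""
    return text.encode("utf-8")   # placeholder for audio samples

def assistant(audio: bytes) -> bytes:
    text = automatic_speech_recognition(audio)
    meaning = natural_language_understanding(text)
    reply = natural_language_generation(meaning)
    return speech_synthesis(reply)

print(assistant(b"...microphone samples..."))
```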

There is much progress being made with each of these areas. For example, the ASR developers are training computers on how humans talk using machine learning and huge libraries of actual human speech and human interactions from social media sites. They are seeing progress as computers learn the many nuances of the ways that humans communicate. In our science fiction we’ve usually portrayed future computers that talk woodenly, like HAL from 2001: A Space Odyssey. It looks like our future instead will be personal assistants that speak to each of us using our own slang, idioms, and speaking style, and in realistic-sounding voices of our choosing. The goal for the industry is to make computer speech indistinguishable from human speech.

The book also includes some interesting history of the various voice assistants. One of the most interesting anecdotes is about how Apple blew its early lead in computer speech. Steve Jobs was deeply interested in the development of Siri and told the development team that Apple was going to give the product a high priority. However, Jobs died the day after Siri was announced to the public, and Apple management put the product on the back burner for a long time.

The book dives into some technologies related to computer speech and does so in an understandable way. For instance, the book looks at the current status of Artificial Intelligence and at how computers ‘learn’ and how that research might lead to better voice recognition and synthesis. The book looks at the fascinating attempts to create computer neural networks that mimic the functioning of the human brain.

Probably the most interesting part of the book is the last few chapters that talk about the likely impact of computer speech. When we can converse with computers as if they are people, we’ll no longer need a keyboard or mouse to interface with a computer. At that point, the computer is likely to disappear from our lives and computing will be everywhere in the background. The computing power needed to enable computer speech is going to have to be in the cloud, meaning that we just speak when we want to interface with the cloud.

Changing to voice interface with the cloud also drastically changes our interface with the web. Today most of us use Google or some other search engine when searching for information. While most of us select one of the choices offered on the first or second page of the search results, in the future the company that is providing our voice interface will be making that choice for us. That puts a huge amount of power into the hands of the company providing the voice interface – they essentially get to choreograph our entire web experience. Today the leading candidates to be that voice interface are Google and Amazon, but somebody else might grab the lead. There are ethical issues associated with a choreographed web – the company doing our voice searches is deciding the ‘right’ answer to questions we ask. It will be incredibly challenging for any company to do this without bias, and more likely they will do it to drive profits. Picture Amazon driving all buying decisions to its platform.

The transition to voice computing also drastically changes the business plans of a lot of existing technology companies. Makers of PCs and laptops are likely to lose most of their market. Search engines become obsolete. Social media will change drastically. Web advertising will take a huge hit when we don’t see ads – it’s hard to think users will tolerate listening to many ads as part of the web interface experience.

The book makes it clear that this is not science fiction but is a technology that will be maturing during the next decade. I recently saw a video of teens trying to figure out how to use a rotary dial phone, but it might not be long before kids will grow up without ever having seen a mouse or a QWERTY keyboard. I will admit that a transition to voice is intimidating, and they might have to pry my keyboard from my cold, dead hands.

Is There a Business Case for 5G Cellular?

Readers might think I spent too much time writing about 5G. However, I’m asked about 5G almost every day. Existing ISPs want to know if 5G is a real threat. Potential ISPs want to know if they should pause their business plans until they understand 5G’s impact. Cities want to know what to expect. The cellular companies have made such a huge deal about 5G that they’ve spooked the rest of the industry.

Today I ask perhaps the most fundamental question of all – is there a business case for 5G cellular? I’m not talking about 5G wireless loops to homes – I’m just asking whether there is a financial justification for the cellular companies to upgrade their cell sites to 5G.

Before answering that question, it’s good to remember that the cellular companies badly need to implement 5G because their 4G networks are facing a crisis. After years of training customers to be stingy in using cellphone data, they are now encouraging users to stream video. The result of this shift is that total cellular data usage is now doubling every two years. Any network engineer will tell you that is deadly growth, particularly for a cellular network. The existing 4G network can’t handle this growth for more than a few more years. While some of this growth can be temporarily mitigated by inserting small cell sites into the network, that doesn’t look like more than a band-aid fix if broadband keeps growing at this fast pace. Small cell sites will be overwhelmed almost as quickly as they are built.
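Doubling every two years compounds faster than intuition suggests. A quick back-of-the-envelope sketch makes the point – the starting figure of 100 is an arbitrary placeholder, not a measured traffic number:

```python
# Compound growth of cellular data demand, doubling every two years.
# The starting load of 100 units is an arbitrary placeholder.

load = 100.0
for year in range(0, 11, 2):
    print(f"Year {year:2}: relative demand {load:7.0f}")
    load *= 2   # doubles every two years

# After ten years the same cell sites face 32x the original demand.
```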

The carriers need 5G because it will expand the capacity of each cell site by allowing many more customers to use a cell site simultaneously. By taking advantage of frequency slicing and the ability to mix and match multiple frequencies, a 5G cell site will be a huge step up in efficiency. The cellular carriers have not publicly admitted that they need 5G just to keep their networks running – but they really don’t have a choice.

The question, though, is whether there is a new revenue stream to help pay for the 5G upgrades. To be honest, I can’t find any early 5G cellular application that will generate much revenue in the near future. The obvious new revenue source would be to charge a premium price to use 5G data on a cellphone. There might be some people willing to pay extra in the early stages of the 5G roll-out, but as 4G morphs over time into 5G, any willingness to pay more for data will melt away.

I also wonder if customers will really value faster cellular data speeds. First, we aren’t talking about a ton of extra speed. Forget the recent trials of millimeter wave 5G – that’s a gimmick for now that will not be available anywhere other than in dense urban centers. The 5G specification that matters to the real world is the goal for 5G speeds to increase over a decade to 100 Mbps.

Good 4G data speeds today are in the range of 15 Mbps and that is more than enough speed to stream data while performing any functions we want from a cellphone. Faster speeds will not stream video any faster. Over time perhaps our cellphones will be able to create augmented reality universes, but that technology won’t be here for a while. Faster data speeds are vitally important in a home where we run multiple large data streams simultaneously – but a cellphone is, by definition, one device for one user.
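A rough sanity check shows why. The per-stream bitrates below are typical published figures rather than carrier measurements:

```python
# Rough check of why ~15 Mbps already covers a single phone's streaming needs.
# The per-stream bitrates are typical published figures, not carrier numbers.

available_mbps = 15
streams = {"music": 0.3, "SD video": 3, "HD video": 5}

for name, needed in streams.items():
    headroom = available_mbps - needed
    print(f"{name}: ~{needed} Mbps, leaving {headroom:.1f} Mbps unused")

# Once a stream has the bits it needs, extra speed just sits idle; a faster
# connection does not make the same video play any sooner or look any better.
```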

The real advantage of 5G is the ability to make large numbers of connections from a single cell site. It’s also hard to see an immediate path to monetize that. I talk to a friend many mornings as he commutes and he always gets disconnected at the Eisenhower bridge on the DC beltway – there are not enough cellular connections there to allow for handoffs between Maryland and Virginia. 5G will finally fix that problem, but I can’t see anybody paying extra to not be cut off on the bridge – they will finally be getting what they’ve always expected.

Eventually 5G will have big potential as the connection for outdoor sensors, IoT devices, smart cars, smart streetlights, etc. There is also likely to eventually be a huge market for wearables that might include fitness monitors, medical monitors, smart glasses, and even smart clothes. However, all of these applications will take time to come to market – there is a bit of chicken and egg in that these technologies will likely never take off until there is universal 5G coverage. There is very little revenue likely in the next few years for outdoor applications – although this might eventually be the primary new source of 5G revenue.

I look back to last fall when Ronan Dunne, an EVP of Verizon Wireless, made his case to investors for the potential for 5G. He included the outdoor sensors I mention above. He also cited applications like retail, where holograms might spring up near merchandise in stores. He talked about stock trading that takes advantage of the low latency on 5G. He mentioned gaming, which would benefit from lower latency. Most of these applications offer eventual potential for 5G. But none of these applications are going to produce giant revenues over the next three or four years. In the short run it’s hard to imagine almost any immediate revenue from these applications.

Predicting technology is always a crap shoot and perhaps new applications will arise that need 5G that even Verizon hasn’t imagined. The list of applications that Verizon gave to investors is underwhelming and reflects the fact that there is likely no 5G application that will significantly add to the bottom line of the cellular carriers in the immediate future.

This really brings home the idea that as a nation we are not in a worldwide 5G competition. The carriers need 5G soon to stop the collapse of the 4G data networks in busy neighborhoods. I have a hard time thinking they need it immediately for anything else – although eventually we will be surrounded by 5G applications.

The Impact of Satellite Broadband

Recently I’ve had several people ask me about the expected impact of low-orbit satellite broadband. While significant competition from satellites is probably a number of years away, there are several major initiatives like StarLink (Elon Musk), Project Kuiper (Amazon), and OneWeb that have announced plans to launch swarms of satellites to provide broadband.

At this early stage, it’s nearly impossible to know what impact these companies might have. We don’t know anything about their download speeds or capacity, their pricing strategy, or their targeted market, so it’s impossible to begin to predict their impact. We don’t even know how long it’s going to take to get these satellites into space, since these three companies alone have plans to launch over 10,000 new satellites – a tall task when compared to the 1,100 satellites currently active in space.

Even without knowing any of these key facts, BroadbandNow recently grabbed headlines around the industry by predicting that low-orbit satellites will bring an annual savings of $30 billion for US broadband customers. Being a numbers guy, I never let this kind of headline pass without doing some quick math.

They explain their method of calculation on their web site. They are making several major assumptions about the satellite industry. First, they assume the satellite providers will compete on price and will compete in every market in the country. Since the vast majority of Americans live in metro areas, BroadbandNow is assuming the satellite providers will become a major competitor in every city. They also assume that the satellites will be able to connect to a huge number of customers in the US, which will force other ISPs to lower prices.

Those assumptions would have to be true to support the $30 billion in projected annual consumer savings. That is an extraordinary number and works out to be a savings of almost $20 per month for every household in the US. If you spread the $30 billion over only those households that buy broadband today, that would be a savings of over $23 per month. If you further factor out the folks who live in large apartments and don’t get a choice of their ISP, the savings jumps to $27 per household per month. The only way to realize savings of that magnitude would be from a no-holds-barred broadband price war where the satellite providers are chewing into market penetrations everywhere.
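The division is easy enough to check. Here is a quick sketch using round household counts – the counts are my own approximations, not figures from the BroadbandNow analysis:

```python
# Sanity check on the claimed $30 billion per year in consumer savings.
# Household counts are rough approximations, not figures from the study.

annual_savings = 30e9              # dollars per year, per the BroadbandNow claim

all_households = 128e6             # approx. total US households
broadband_households = 107e6       # approx. households that buy broadband
households_with_choice = 93e6      # approx. after excluding large apartments

for label, count in [("every household", all_households),
                     ("broadband households", broadband_households),
                     ("households with a choice of ISP", households_with_choice)]:
    per_month = annual_savings / count / 12
    print(f"Spread over {label}: ${per_month:.2f} per month")
```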

I foresee a different future for the satellite industry. Let’s start with a few facts we know. While 10,000 satellites is an impressive number, that’s a worldwide number and there will be fewer than 1,000 satellites over the US. Most of the satellites are tiny – these are not the same as the huge satellites launched by HughesNet. Starlink has described their satellites as varying in size between a football and a small dorm refrigerator. At those small sizes these satellites are probably the electronic equivalent of the OLT cabinets used as neighborhood nodes in a FTTH network – each satellite will likely support some limited and defined number of customers. OneWeb recently told the FCC in a spectrum docket that they are envisioning needing one million radio links, meaning their US satellites would be able to serve one million households. Let’s say that all of the satellite providers together will serve 3 – 5 million homes in the US – that’s an impressive number, but it’s not going to drive other ISPs into a pricing panic.

I also guess that the satellite providers will not offer cheap prices – they don’t need to. In fact, I expect them to charge more than urban ISPs. The satellite providers will have one huge market advantage – the ability to bring broadband where there isn’t landline competition. The satellite providers can likely use all of their capacity selling only in rural America at a premium price.

We still have no real idea about the speeds that will be available with low-orbit satellite broadband. We can ignore Elon Musk who claims he’ll be offering gigabit speeds. The engineering specs show that a satellite can probably make a gigabit connection, but each satellite is an ISP hub and will have a limited bandwidth capacity. Like with any ISP network, the operator can use that capacity to make a few connections at a high bandwidth speed or many more connections at slower speeds. Engineering common sense would predict against using the limited satellite bandwidth to sell gigabit residential products.
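To see the trade-off, treat a single satellite as an ISP hub with a fixed pool of capacity to divide among subscribers. The per-satellite capacity and oversubscription ratio below are pure placeholders, since none of the providers have published those numbers:

```python
# The capacity trade-off for a single satellite acting as an ISP hub.
# The 20 Gbps per-satellite figure and 4:1 oversubscription ratio are
# placeholders; real per-satellite capacity has not been published.

satellite_capacity_mbps = 20_000
oversubscription = 4               # assume users rarely peak at the same time

for speed_mbps in (1000, 100, 25):
    subscribers = satellite_capacity_mbps * oversubscription // speed_mbps
    print(f"Selling {speed_mbps} Mbps plans: roughly {subscribers} subscribers per satellite")
```

Whatever the real numbers turn out to be, the shape of the trade-off is the same: every gigabit subscriber displaces dozens of slower-speed subscribers on the same satellite.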

That doesn’t mean the satellite providers won’t be lured by big bandwidth customers. They might make more money selling gigabit links at a premium price to small cell sites and ignoring the residential market completely. It’s a much easier business plan, with drastically lower operating costs to sell their capacity to a handful of big cellular companies instead of selling to millions of households. That is going to be a really tempting market alternative.

I could be wrong and maybe the satellite guys will find a way to sell many tens of millions of residential links and compete in every market, in which case they would have an impact on urban broadband prices. But unless the satellites have the capacity to sell to almost everybody, and unless they decide to compete on price, I still can’t see a way to ever see a $30 billion national savings. I instead see them making good margins by selling where there’s no competition.

Clearing Mid-range Spectrum

The FCC is in the process of trying to free up mid-range spectrum for 5G. They just opened a Notice of Proposed Rulemaking looking at 2.5 GHz spectrum, located in the contiguous block between 2495 and 2690 MHz. Overall this is the largest contiguous block of mid-range spectrum. Over half of the spectrum sits idle today, particularly in rural America. The history of this spectrum demonstrates the complications involved in trying to reposition spectrum for broadband and cellular use.

The frequency was first assigned by the FCC in 1963 when it was made available to school systems to transmit educational TV between multiple schools. The spectrum band was called Instructional Television Fixed Service (ITFS). The band was divided into twenty channels and could transmit a TV signal up to about 35 miles. I grew up in a school system that used the technology and from elementary school onward we had a number of classes taught on the TV. Implementing the technology was expensive and much of the spectrum was never claimed.

In 1972 the FCC recognized the underuse of the spectrum and allowed commercial operators to use the bands of 2150 to 2162 MHz on an unlicensed basis for pay-TV transmissions to rooftop antennas. The spectrum could only carry a few TV channels and in the 1970s was used in many markets to transmit the early version of HBO and Nickelodeon. This spectrum band was known as Multipoint Distribution Service (MDS) and also was good for about 35 miles.

Reacting to pressure from cellular companies, the FCC reallocated eight additional channels of the spectrum for commercial use. Added to the MDS spectrum this became known as Multichannel Multipoint Distribution Service (MMDS). At the time this displaced a few school systems and anybody using the spectrum had to pay to move a school system to another workable channel. This spectrum was granted upon request to operators for specific markets.

In 1991 the FCC changed the rules for MMDS and allowed the channels to be used to transmit commercial TV signals. In 1995 any unused MMDS spectrum was sold under one of the first FCC auctions, which was the first to divide service areas into the geographic areas known as Basic Trading Areas (or BTAs) that are still used today. Before this auction, the spectrum was awarded in 35-mile circles called Geographic Service Areas (GSAs). The existing GSAs were left in place and the spectrum sold at auction had to work around existing GSAs.

The FCC started getting pressure from wireless companies to allow for the two-way transmission of data in the frequency (up to now it had been all one-way delivery to a customer site). In 2005 the FCC changed the rules and renamed the block of spectrum as Broadband Radio Service (BRS). This added significant value to licenses since the spectrum could now be repositioned for cellular usage.

At this point, Clearwire entered the picture and saw the value of the spectrum. They offered to buy or lease the spectrum from school systems at prices far lower than market value and were able to amass the right to use a huge amount of the spectrum nationwide. Clearwire never activated much of the spectrum and was in danger of losing the rights to use it. In 2013 Sprint purchased Clearwire, and Sprint is the only cellular company using the spectrum band today.

Today the spectrum band has all sorts of users. There are still school districts using the spectrum to transmit cable TV. There are still license holders who never stopped using the 35-mile GSA areas. There are still MMDS license holders who found a commercial use for the spectrum. And Sprint holds much of the spectrum not held by these other parties.

The FCC is wrestling in the NPRM with how to undo the history of the spectrum to make it more valuable to the industry. Education advocates are still hoping to play in the space since much of the spectrum sits idle in rural America (as is true with a lot of cellular and other mid-range spectrum). The other cellular carriers would like to see chunks of the spectrum sold at auction. Other existing license holders are fighting to extract the biggest value out of any change of control of the spectrum.

The challenge for repositioning this spectrum is complicated because the deployment of the spectrum differs widely today by market. The FCC is struggling to find an easy set of rules to put the genie back in the bottle and start over again. In terms of value for 5G, this spectrum sits in a sweet spot in terms of coverage characteristics. Using the spectrum for cellular data is probably the best use of the spectrum, but the FCC has to step carefully to do this in such a way as to not end up in court cases for years disputing any order. Reallocating spectrum is probably the most difficult thing the FCC does and it’s not hard to see why when you look at the history of this particular block of spectrum and realize that every block of spectrum has a similar messy past.

AT&T and Augmented Reality

Lately it seems like I find a news article almost every week talking about new ways that people are using broadband. The latest news is an announcement that AT&T is selling Magic Leap augmented reality headsets in six cities plus online.

The AT&T launch is being coordinated with the release of an augmented reality immersive experience that will bring Game of Thrones into people’s homes with a themed gaming experience called The Dead Must Die, teased in this trailer.

Augmented reality differs from virtual reality in that augmented reality overlays images into the local environment. A user will see characters in their living room as opposed to being immersed in a total imaginary environment with virtual reality.

Magic Leap is one of the most interesting tech start-ups. They started in 2014 with a $542 million investment, and since then have raised over $2.3 billion. The company’s investors and advisors include people like Alibaba executive vice chair Joe Tsai and director Steven Spielberg. There have been rumors over the years of an impending product, but until now they’ve never brought a product to market. AT&T will be selling Magic Leap’s first headset, called the Magic Leap One Creator Edition, for a price of $2,295. The mass-market headset will surely cost a lot less.

AT&T’s interest in the technology extends past selling the headsets. Magic Leap recently signed a deal with the NBA and its broadcast partner Turner, which is now owned by AT&T and will obviously be looking at augmented reality broadcasts of basketball games.

AT&T’s interest goes even further than that: they are looking at the Magic Leap technology as the entry into the spatial Internet – moving today’s web experience to three dimensions. AT&T sees the Magic Leap headset as the entry point for bringing augmented reality to industries like healthcare, retail and manufacturing. They envision people shopping in 3D, doctors getting 3D computer assistance for visualizing a patient during an operation, and manufacturing workers aided by 3D blueprints overlaid on the factory floor.

While the Magic Leap headset will work on WiFi today, AT&T is promoting Magic Leap as part of their 5G Innovation Program. AT&T is touting this as a technology that will benefit greatly from 5G, which will allow users to go mobile and use the augmented reality technology anywhere.

I couldn’t find any references to the amount of bandwidth used by this first-generation headset, but it has to be significant. Looking at the Game of Thrones application, a user is immersed in a 3D environment and can move and interact with elements in the augmented reality. That means a constant transmission of the elements in the 3D environment. I have to think that is at least equivalent to several simultaneous video transmissions. Regardless of the bandwidth used today, you can bet that as augmented reality becomes mainstream that content makers will find ways to use greater bandwidth.
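For a rough sense of scale, here is that guess expressed as numbers. Both figures are assumptions on my part, not anything published by Magic Leap or AT&T:

```python
# Guessing at the bandwidth of an augmented-reality session by comparing it
# to video streams. Every figure here is an assumption, not a published number.

hd_stream_mbps = 5                     # typical HD video stream
equivalent_streams = 4                 # guess: an AR scene ~ several video feeds
ar_session_mbps = hd_stream_mbps * equivalent_streams

hours_per_day = 1                      # assumed daily AR use
gb_per_month = ar_session_mbps / 8 * 3600 * hours_per_day * 30 / 1000

print(f"One AR session: roughly {ar_session_mbps} Mbps")
print(f"An hour a day of AR: roughly {gb_per_month:.0f} GB per month")
```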

We are already facing a big increase in bandwidth that is needed to support gaming from the cloud – as is now being pushed by the major game vendors. Layering augmented reality on top of that big data stream will increase bandwidth needs by another major increment.

Broadband and Food Safety

I recently saw a presentation that showed how food safety is starting to rely on good rural broadband. I’ve already witnessed many other ways that farmers use broadband like precision farming, herd monitoring, and drone surveillance, but food safety was a new concept for me.

The presentation centered around the romaine lettuce scare of a few months ago. The food industry was unable to quickly identify the source of the contaminated produce and the result was a recall of all romaine nationwide. It turns out the problem came from one farm in California with E. coli contamination, but farmers everywhere paid a steep price as all romaine was yanked from store shelves and restaurants, also resulting in cancellations of upcoming orders.

Parts of the food industry have already implemented the needed solution. You might have noticed that the meat industry is usually able to identify the source of problems relatively quickly and can usually track problems back to an individual rancher or packing house. Cattle farmers are probably the most advanced at tracking the history of herd animals, but all meat producers track products to some extent.

The ideal solution to the romaine lettuce problem is to document every step of the farming process and to make that information available to retailers and eventually to consumers. In the case of romaine that might mean tracking and recording the basic facts of each crop at each farm. That would mean recording the strain of seeds used. It would mean logging the kinds of fertilizer and insecticide applied to a given field. It would mean recording the date when the romaine was picked. The packing and shipping process would then be tracked as well, so that everything from the tracking number on the box or crate to the dates and identity of every intermediate shipper between farm and grocery store would be recorded.
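A sketch of what one of those records might look like is below. There is no federal standard yet, so every field name and value here is invented purely for illustration:

```python
# A sketch of the kind of record a farm might log for each harvested lot.
# No standard exists yet; every field name and value is invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ProduceLot:
    farm_id: str
    crop: str
    seed_strain: str
    field_id: str
    fertilizers: List[str]
    insecticides: List[str]
    harvest_date: str                                    # ISO date the crop was picked
    crate_ids: List[str] = field(default_factory=list)
    shipments: List[str] = field(default_factory=list)   # each handler, farm to store

lot = ProduceLot(
    farm_id="CA-0421", crop="romaine", seed_strain="Green Towers",
    field_id="block-7", fertilizers=["compost"], insecticides=["spinosad"],
    harvest_date="2019-04-02",
)
lot.crate_ids.append("CRATE-18842")
lot.shipments.append("cooler -> regional distributor -> grocery chain")
print(lot)
```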

Initially this would be used to avoid the large blanket recalls like the one that happened with romaine. Ultimately, this kind of information could be made available to consumers. We could wave our smartphone at produce and find out where it was grown, when it was picked and how long it’s been sitting in the store. There are a whole lot of steps that have to happen before the industry can reach that ultimate goal.

The process needs to start with rural broadband. The farmer needs to be able to log the needed information in the field. The day may come when robots can automatically log everything about the growing process, and that will require even more intensive and powerful broadband. The farmer today needs an easy data entry system that allows data to be scanned into the cloud as they work during the growing, harvesting, and packing process.

There also needs to be some sort of federal standards so that every farmer is collecting the same data, and in a format that can be used by every grocery store and restaurant. There is certainly a big opportunity for any company that can develop the scanners and the software involved in such a system.

In many places this can probably be handled with robust cellular data service that extends into the fields. However, there is a lot of rural America that doesn’t have decent, or even any cell service out in the fields. Any farm tracking data is also going to need adequate broadband to upload data into the cloud. Farms with good broadband are going to have a big advantage over those without. We already know this is true today for cattle and dairy farming where detailed records are kept on each animal. I’ve talked to farmers who have to drive every day to find a place to upload their data into the cloud.

In the many counties where I work today the farmers are among those leading the charge for better broadband. If selling produce or animals requires broadband we are going to see farmers move from impatience to insistence when lack of connectivity means loss of profits.

I know as a consumer that I would feel better knowing more about the produce I buy. I’d love to buy more produce that was grown locally or regionally, but it’s often nearly impossible to identify in the store. I’d feel a lot safer knowing that the batch of food I’m buying has been tracked and certified as safe. Just in the last year there’s been recalls on things like romaine, avocados, spring onions, and packaged greens mixes. I don’t understand why any politician that serves a farming district is not screaming loudly for a national solution for rural broadband.

Millimeter Wave Cellular Service

Verizon is claiming to have the first real-world deployment of fast 5G cellular service. They launched an early version of what they are calling 5G in downtown Chicago and Minneapolis. This launch involves the deployment of millimeter wave spectrum.

A review of the cellular performance in FierceWireless showed exactly what was to be expected. This new service will only be available from a few cell sites in each city. For now the service can only be received using a Motorola Z3 handset that has been modified with a 5G Moto Mod adapter.

As would be expected, the millimeter wave broadband was fast, with peak speed measured at 500 Mbps. But also as expected, the coverage area is small, and millimeter wave spectrum is easily blocked by almost any impediment. Walking inside a building or around the corner of a building killed the broadband signal. The signal speed cut in half when received through a window. When not in the range of the millimeter wave signal the phone reverts to 4G, because Verizon is not yet close to implementing any actual 5G standards. This was not a trial of 5G technology – it’s a trial that shows that millimeter wave spectrum can carry a lot of data. That is especially easy to demonstrate when there are only one or two users on a given cell site.

Verizon announced a fee of $10 per month for the faster data speed, but almost immediately said the fee will be waived. This launch is another marketing gimmick letting Verizon get headlines proclaiming 500 Mbps cellular data speeds. The reviewer noted that the Verizon store in downtown Chicago was not ready to provide the product to anybody.

There are big issues with using millimeter wave spectrum for cellular service. I first ask what a cellphone user can do with that kind of speed. A cellphone can already be used to stream a video on a decent 4G connection. Other than software updates there isn’t any real need to download big files on a cellphone. It’s unlikely that the cellular carriers are going to let you tether speeds of that magnitude to a computer.

The other big issues will be the real-life limitations of millimeter wave spectrum outdoors. Since the frequency won’t pass through walls, this is strictly going to be an outdoor walking technology. As the FierceWireless review showed, it’s extremely easy to walk out of coverage. A cellular carrier will need to provide multiple cell sites in very close proximity in order to cover a given area.

It’s hard to think that there will ever be many subscribers willing to pay $10 more per month for a product with these limitations. How many people care about getting faster data speed outside, and only in areas of a city that are close to 5G transmitters? Would many cellular customers pay more so that they could save a few minutes per month to download software updates?

It’s hard to envision that the incremental revenues from customers will ever justify the cost of deploying multiple cell sites within close proximity of each other. T-Mobile already announced that they don’t plan to charge extra for 5G data when it’s available – there is no incentive to offer the product if there is no additional revenue.

What I found interesting is that Verizon also announced that they will be launching this same product in 20 additional urban markets soon, with 30 markets by the end of the year. The company will be using this launch to promote the new Galaxy S10 5G phone that will be able to utilize the millimeter wave spectrum. Verizon is touting the new service by saying that it will provide access to faster streaming, augmented-reality, gaming, and consumer and business applications.

If anything, this launch is a gimmick to sell more of the expensive 5G handsets. I wonder how many people will buy this phone hoping for faster service, only to realize that they have to stand outside close to a downtown millimeter wave cell site to use it. How many people want to go outside to enjoy faster gaming or augmented reality?

This is not to say that millimeter wave spectrum doesn’t have value, but that value will manifest when Verizon or somebody offers an indoor 5G modem that’s connected to a landline broadband connection. That would enable a cellphone to connect to faster gaming or augmented reality. That has some definite possibilities, but it’s not cellular service – it’s an indoor broadband connection using a cellphone as the receiver.

I’m really starting to hate these gimmicks. Verizon and AT&T are both painting a false picture of 5G by making everybody think it will provide gigabit speeds everywhere – something that is not even listed as a goal of the 5G specifications. These gimmicks are pure marketing hype. The companies want to demonstrate that they are cutting edge. The gimmicks are aimed even more at politicians who the carriers are courting to support deregulation of broadband in the name of 5G. In the case of this particular gimmick, Verizon might sell more Samsung 5G phones. But the gimmicks are just gimmicks and this trial is not a real product.

The Fourth Industrial Revolution

There is a lot of talk around the world among academics and futurists that we have now entered into the beginnings of the fourth industrial revolution. The term industrial revolution is defined as a rapid change in the economy due to technology.

The first industrial revolution came from steam power that drove the creation of the first large factories to create textiles and other goods. The second industrial revolution is called the age of science and mass production and was powered by the simultaneous development of electricity and oil-powered combustion engines. The third industrial revolution was fairly recent and was the rise of digital technology and computers.

There are differing ideas of what the fourth industrial revolution means, but every prediction involves using big data and emerging technologies to transform manufacturing and the workplace. The fourth industrial revolution means mastering and integrating an array of new technologies including artificial intelligence, machine learning, robotics, IoT, nanotechnology, biotechnology, and quantum computing. Some technologists are already predicting that the shorthand description for this will be the age of robotics.

Each of these new technologies is in its infancy, but all are progressing rapidly. Take the most esoteric technology on the list – quantum computing. As recently as three or four years ago this was mostly an academic concept, and we now have first-generation quantum computers. I can’t recall where I read it, but I remember a quote that said that if we think of the fourth industrial revolution in terms of a 1,000-day process, we are now only on day three.

The real power of the fourth industrial revolution will come from integrating the technologies. The technology that is the most advanced today is robotics, but robotics will change drastically when robots can process huge amounts of data quickly and can use AI and machine learning to learn and cope with the environment in real time. Robotics will be further enhanced in a factory or farm setting by integrating a wide array of sensors to provide feedback from the surrounding environment.

I’m writing about this because all of these technologies will require the real-time transfer of huge amounts of data. Futurists and academics who talk about the fourth industrial revolution seem to assume that the needed telecom technologies already exist – but they don’t exist today and need to be developed in conjunction with the other new technologies.

The first missing element needed to enable the other technologies is computer chips that can process huge amounts of data in real time. Current chip technology has a built-in choke point where data is queued and fed into and out of a chip for processing. Scientists are exploring a number of ways to move data faster. For example, light-based computing has the promise to move data at speeds up to 50 Gbps. But even that’s not fast enough, and there is research being done using lasers to beam data directly into the chip processor – a process that might increase processing speeds 1,000 times over current chips.

The next missing communications element is a broadband technology that can move data fast enough to keep up with the faster chips. While fiber can be blazingly fast, a fiber is far too large to use at the chip level, and so data has to be converted at some point from fiber to some other transmission path.

The amount of data that will have to be passed in some future applications is immense. I’ve already seen academics bemoaning that millimeter wave radios are not fast enough, so 5G will not provide the solution. Earlier this year the first worldwide meeting was held to officially start collaborating on 6G technology using terahertz spectrum. Transmissions at those super-high frequencies only stay coherent for a few feet, but these frequencies can carry huge amounts of data. It’s likely that 6G will play a big role in providing the bandwidth to the robots and other big-data needs of the fourth industrial revolution. From the standpoint of the telecom industry, we’re no longer talking about the last mile – we are starting to address the last foot!

Are You Ready for 6G?

The first 6G summit convenes this coming weekend in Levi, Lapland, Finland, sponsored by the University of Oulu. The summit will end with a closed-door, invitation-only assembly of wireless researchers and vendors with the goal to create a draft vision statement defining the goals of 6G research. Attendees include all of the major wireless vendors like Huawei, Ericsson, Samsung, and NTT, along with researchers from numerous universities and groups like Nokia Bell Labs.

As you would expect, even as 5G standards were being finalized there were already private and academic research labs working on what will come next. So far, some of the vision for 6G includes concepts like:

  • Use of higher frequencies between 100 GHz and 1 THz, introducing the world to the idea of terahertz spectrum. The upper end of this range lies between radio waves and infrared light. The FCC just approved research above 95 GHz.
  • Researchers believe this next generation of wireless will be needed to finally enable the 3D holograms required for lifelike telepresence.
  • The higher frequencies would also allow for densification and for the simultaneous delivery of multiple large-bandwidth transmissions. Researchers already believe that with the higher frequencies the capacity of a wireless network could be as much as 1,000 times that of 5G – but even 10 times faster would be a major breakthrough.
  • Scientists anticipate within a decade that we’ll have advanced far enough with artificial intelligence to enable AI-powered routing that will choose the best path in real time for each packet and will significantly decrease latency.
  • Various researchers from Brown University and universities in Australia have said that 5G will be inadequate to satisfy our future needs for both bandwidth and the overall number of IoT connections. One of the goals of 6G will be to increase the number of connected devices from a given transmitter by one to two orders of magnitude.

The higher frequencies will allow for even faster data transmission, as much as 10 times faster than the gigabit speeds envisioned for point-to-multipoint 5G using millimeter wave radios.

There are a number of issues to be overcome with the higher frequencies, the primary one being that radio waves at those frequencies won’t pass through any barrier. However, scientists already think there might be strategies for bouncing the waves around obstacles.

The other shortcoming of the frequencies is the short distance before the signal dissipates. This is likely to limit the higher frequencies to indoor use, allowing for indoor wireless networks with speeds as fast as 10 Gbps.

Interestingly, researchers in China say that this vision of 6G is the end of the line in terms of major platform upgrades and that there will never be a 7G. After 6G the goal over time will be to improve the performance of the various aspects of the technologies involved. Apparently, the Chinese have never met any AT&T and Verizon marketing staff.

Many of the groups researching these topics are already talking about having a 6G set of standards by 2030. But there is a lot of research to be done, including fundamental steps like developing chips capable of handling the higher speeds. We also will hit regulatory barriers – governments all regulate the use of radio waves, but it might be harder to regulate the use of the light-like frequencies at the base of the infrared spectrum.