Spectrum and 5G

All of the 5G press has been touting how 5G is going to bring gigabit wireless speeds everywhere. But that will only be possible with millimeter wave spectrum, and even then it requires a reasonably short distance between sender and receiver as well as bonding together multiple signals using MIMO antennas.

It’s a shame that we’ve let the wireless marketers equate 5G with gigabit because that’s what the public is going to expect from every 5G deployment. As I look around the industry I see a lot of other uses for 5G that are going to produce speeds far slower than a gigabit. 5G is a standard that can be applied to any wireless spectrum and which brings some benefits over earlier standards. 5G makes it easier to bond multiple channels together for reaching one customer. It also can increase the number of connections that can be made from any given transmitter – with the biggest promise that the technology will eventually allow connections to huge numbers of IoT devices.

Anybody who follows the industry knows about the 5G gigabit trials. Verizon has been loudly touting its gigabit 5G connections using the 28 GHz frequency and plans to launch the product in up to 28 markets this year. They will likely use this as a short-haul fiber replacement to allow them to more quickly add a new customer to a fiber network or to provide a redundant data path to a big data customer. AT&T has been a little less loud about their plans and is going to launch a similar gigabit product using 39 GHz spectrum in three test markets soon.

But there are also a number of announcements for using 5G with other spectrum. For example, T-Mobile has promised to launch 5G nationwide using its 600 MHz spectrum. This is a traditional cellular spectrum that is great for carrying signals for several miles and for going around and through obstacles. T-Mobile has not announced the speeds it hopes to achieve with this spectrum. But the data capacity for 600 MHz is limited, and bonding numerous signals together for one customer will create something faster than LTE, but not spectacularly so. It will be interesting to see what speeds they can achieve in a busy cellular environment.

Sprint is taking a different approach and is deploying 5G using the 2.5 GHz spectrum. They have been testing the use of massive MIMO antennas that contain 64 transmit and 64 receive channels. This spectrum doesn’t travel far when used for broadcast, so this technology is going to be used best with small cell deployments. The company claims to have achieved speeds as fast as 300 Mbps in trials in Seattle, but that would require bonding together a lot of channels, so a commercial deployment is going to be a lot slower in a congested cellular environment.

Outside of the US there seems to be a growing consensus to use 3.5 GHz – the Citizens Broadband Radio Service (CBRS) band. That raises the interesting question of which frequencies will end up winning the 5G race. In every new wireless deployment the industry needs to reach an economy of scale in the manufacture of both the radio transmitters and the cellphones or other receivers. Only then can equipment prices drop to the point where a 5G-capable phone will be similar in price to a 4G LTE phone. So the industry at some point soon will need to reach a consensus on the frequencies to be used.

In the past we rarely saw a consensus; rather, some manufacturer and wireless company won the race to get customers and dragged the rest of the industry along. This has practical implications for early adopters of 5G. For instance, somebody buying a 600 MHz phone from T-Mobile is only going to be able to use that data function when near a T-Mobile tower or mini-cell. Until industry consensus is reached, phones that use a unique spectrum are not going to be able to roam on other networks like happens today with LTE.

Even phones that use the same spectrum might not be able to roam on other carriers if they are using the frequency differently. There are now 5G standards, but we know from practical experience with other wireless deployments in the past that true portability between networks often takes a few years as the industry works out bugs. This interoperability might be sped up a bit this time because it looks like Qualcomm has an early lead in the manufacture of 5G chip sets. But there are other chip manufacturers entering the game, so we’ll have to watch this race as well.

The word of warning to buyers of first generation 5G smartphones is that they are going to have issues. For now it’s likely that the MIMO antennae are going to use a lot of power and will drain cellphone batteries quickly. And the ability to reach a 5G data signal is going to be severely limited for a number of years as the cellular providers extend their 5G networks. Unless you live and work in the heart of one of the trial 5G markets it’s likely that these phones will be a bit of a novelty for a while – but will still give a user bragging rights for the ability to get a fast data connection on a cellphone.

Edging Closer to Satellite Broadband

A few weeks ago Elon Musk’s SpaceX launched two test satellites that are the first in a planned low-orbit satellite network that will blanket the earth with broadband. The eventual network, branded as Starlink, will consist of 4,425 satellites deployed at 700 miles above earth and another 7,518 deployed at around 210 miles of altitude.

Getting that many satellites into orbit is a daunting logistical task. To put this into perspective, the nearly 12,000 satellites needed are twice the number of satellites that have been launched in history. It’s going to take a lot of launches to get these into the sky. SpaceX’s workhorse rocket the Falcon 9 can carry about ten satellites at a time. They also have tested a Falcon Heavy system that could carry 20 or so satellites at a time. If they can make a weekly launch of the larger rocket that’s still 596 launches and would take 11.5 years. To put that number into perspective, the US led the world with 29 successful satellite launches last year, with Russia second with 21 and China with 16.
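The launch arithmetic is worth laying out explicitly. A minimal sketch, using the satellite counts from the filing; treat the 20-satellite Falcon Heavy capacity as the rough assumption it is:

```python
import math

# Satellite counts from the FCC filing cited above.
low_orbit = 4_425       # planned at ~700 miles of altitude
very_low_orbit = 7_518  # planned at ~210 miles of altitude
total = low_orbit + very_low_orbit

# Assumed Falcon Heavy capacity of ~20 satellites per launch.
per_launch = 20
launches = math.ceil(total / per_launch)
years_at_weekly_launches = launches / 52

print(total)                               # 11943 satellites
print(launches)                            # 598 launches
print(round(years_at_weekly_launches, 1))  # 11.5 years
```

The rounding lands a couple of launches off the 596 quoted above, but the conclusion is the same: roughly 600 launches and more than a decade of weekly flights.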

SpaceX is still touting this as a network that can make gigabit connections to customers. I’ve read the FCC filing for the proposed network several times, and it looks to me like that kind of speed will require combining signals from multiple satellites to a single customer. I have to wonder if that’s practical when talking about deploying this network to tens of millions of simultaneous subscribers. It’s likely that their standard bandwidth offering is going to be something significantly less.

There is also a big question to me about the capacity of the backhaul network that carries signals to and from the satellites. It’s going to take some major bandwidth to handle the volume of broadband users that SpaceX has in mind. We are seeing landline long-haul fiber networks today that are stressed and reaching capacity. The satellite network will face the same backhaul problems as everybody else and will have to find ways to cope with a world where broadband demand doubles every 3 years or so. If the satellite backhaul gets clogged or if the satellites get over-subscribed then the quality of broadband will degrade like with any other network.
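That doubling every 3 years compounds quickly. A quick sketch of the traffic multiplier a backhaul network would need to absorb:

```python
def demand_multiple(years, doubling_period_years=3):
    """How many times today's traffic a network must carry after `years`,
    if demand doubles every `doubling_period_years` years."""
    return 2 ** (years / doubling_period_years)

# A link sized exactly for today's load is over capacity within three
# years and more than an order of magnitude short within a decade:
print(demand_multiple(3))   # 2.0x
print(demand_multiple(9))   # 8.0x
print(demand_multiple(15))  # 32.0x
```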

Interestingly, SpaceX is not the only one chasing this business plan. For instance, billionaire Richard Branson wants to build a similar network that would put 720 low-orbit satellites over North America. Telesat has launched two different test satellites and also wants to deploy a large satellite network. Boeing also announced intentions to launch a 1,000-satellite network over North America. It’s sounding like our skies are going to get pretty full!

SpaceX is still predicting that the network is going to cost roughly $10 billion to deploy. There’s been no talk of consumer prices yet, but the company obviously has a business plan – Musk wants to use this business as the primary way to fund the colonization of Mars. But pricing is an issue for a number of reasons. The satellites will have some finite capacity for customer connections. In one of the many articles I read I saw a goal for the network of 40 million customers (I don’t know if that’s the right number, but there is some number of simultaneous connections the network can handle). 40 million customers sounds huge, but with a current worldwide population of over 7.6 billion people it’s minuscule for a worldwide market.
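The "minuscule" claim is easy to verify with the two figures above:

```python
# Both figures come from the discussion above: a ~40 million subscriber
# goal against a world population of ~7.6 billion.
capacity = 40_000_000
world_population = 7_600_000_000

share = capacity / world_population
print(f"{share:.2%}")  # 0.53% -- about half of one percent of the world
```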

There are those predicting that this will be the salvation for rural broadband. But I think that’s going to depend on pricing. If this is priced affordably then there will be millions in cities who would love to escape the cable company monopoly, and who could overwhelm the satellite network. There is also the issue of local demand. Only a limited number of satellites can see any given slice of geography. The network might easily accommodate everybody in Wyoming or Alaska, but won’t be able to do the same anywhere close to a big city.

Another issue is worldwide pricing. A price that might be right in the US might be ten times higher than what will be affordable in Africa or Asia. So there is bound to be pricing differences based upon regional incomes.

One of the stickier issues will be the reaction of governments that don’t want citizens using the network. There is no way China is going to let citizens bypass the Great Firewall of China by going through these satellites. Repressive regimes like North Korea will likely make it illegal to use the network. And even democratic countries like India might not like the idea – last year they turned down free Internet from Facebook because it wasn’t an ‘Indian’ solution.

Bottom line is that this is an intriguing idea. If the technology works as promised, and if Musk can find the money and can figure out the logistics to get this launched it’s going to be another new source of broadband. But satellite networks are not going to solve the world’s broadband problems because they are only going to be able to help some small limited percentage of the world’s population. But with that said, a remote farm in the US or a village in Africa is going to love this when it’s available.

5G is Fiber-to-the-Curb

The marketing from the wireless companies has the whole country buzzing with speculation that the whole world is going to go wireless with the introduction of 5G. There is a good chance that within five years a reliable pole-mounted wireless technology could become the preferred way to go from the curb to homes and businesses. When that happens we will finally have wireless fiber-to-the-curb – something that I’ve heard talked about for at least 25 years.

I remember visiting an engineer in the horse country of northern Virginia in the 1990s who had developed a fiber-to-the-curb wireless technology that could deliver more than 100 Mbps from a pole to a house. His technology was limited in that there had to be one pole-mounted transmitter per customer, and there was a distance limitation of a few hundred feet for the delivery. But he was clearly on the right track and was twenty years ahead of his time. At that time we were all happy with our 1 Mbps DSL and 100 Mbps sounded like science fiction. But I saw his unit functioning at his home, and if he had caught the attention of a big vendor we might have had wireless fiber-to-the-curb a lot sooner than now.

I have to laugh when I read people talking about our wireless future, because it’s clear that this technology is going to require a lot of fiber. There is a lot of legislative and lobbying work going on to make it easier to mount wireless units on poles and streetlights, but I don’t see the same attention being put into making it easier to build fiber – and without fiber this technology is not going to work as promised.

It’s easy to predict that there are going to be a lot of lousy 5G deployments. ISPs are going to come to a town, connect to a single gigabit fiber and then serve the rest of the town from that one connection. This will be the cheap way to deploy this technology and those without capital are going to take this path. The wireless units throughout the town will be fed with wireless backhaul, with many of them on multiple wireless hops from the source. In this kind of network the speeds will be nowhere near the gigabit capacity of the technology, the latency will be high and the network will bog down in the evenings like any over-subscribed network. A 5G network deployed in this manner will not be a killer app that will kill cable networks.

However, a 5G fiber-to-the-curb network built the right way is going to be as powerful as an all-fiber network. That’s going to mean having neighborhood wireless transmitters to serve a limited number of customers, with each transmitter fed by fiber. When Verizon and AT&T talk about the potential for gigabit 5G this is what they are talking about. But they are not this explicit because they are not likely today to deploy networks this densely. The big ISPs still believe that people don’t really need fast broadband. They will market this new technology by stressing that it’s 5G while building networks that will deliver far less than a gigabit.

There are ISPs who will wait for this technology to mature before switching to it, and they will build networks the right way. In a network with fiber everywhere this technology makes huge sense. One of the problems with a FTTH network that doesn’t get talked about a lot is abandoned drops. Fiber ISPs build drops to homes, and over time a substantial number of premises no longer use the network for various reasons. I know of some 10-year-old networks where as many as 10% of fiber drops have been abandoned by homes that now buy service from somebody else. A fiber-to-the-curb network solves this problem by only serving those who have active service.

I also predict that the big ISPs will make every effort to make this a customer-provisioned technology. They will mail customers a receiver kit to save on a truck roll, because saving money is more important to them than quality. This will work for many customers, but others will stick the receiver in the wrong place and never get the speed they might have gotten if the receiver was mounted somewhere else in the home.

There really are no terrible broadband technologies, but there are plenty of terrible deployments. Consider that there are a huge number of rural customers being connected to fixed wireless networks. When those networks are deployed properly – meaning customers are not too far from the transmitter and each tower has a fiber feed – the speeds can be great. I have a colleague who is 4 miles from a wireless tower and is getting nearly 70 Mbps download. But there are also a lot of under-capitalized ISPs that are delivering speeds of 5 Mbps or even far less using the same technology. They can’t afford to get fiber to towers and instead use multiple wireless hops to get to neighborhood transmitters. This is a direct analogue of what we’ll see in poorly deployed 5G networks.

I think it’s time that we stop using the term 5G as a shortcut for meaning gigabit networks. 5G is going to vary widely depending upon the frequencies used and will vary even more widely depending on how the ISP builds their network. There will be awesome 5G deployments, but also a lot of so-so and even lousy ones. I know I will be advising my clients on building wireless fiber-to-the-curb – and that means networks that still need a lot of fiber.

Gigabit LTE

Samsung just introduced Gigabit LTE into the newest Galaxy S8 phone. This is a technology with the capability to significantly increase cellular speeds, which makes me wonder if the cellular carriers will really be rushing to implement 5G for cellphones.

Gigabit LTE still operates under the 4G standards and is not an early version of 5G. There are three components of the technology:

  • Each phone has a 4X4 MIMO antenna, which is an array of four tiny antennas. Each antenna can make a separate connection to the cell tower.
  • The network must implement carrier aggregation. Both the phone and the cell tower must be able to combine the signals from the various antennas and frequencies into one coherent data path.
  • Finally, the new technology utilizes the 256 QAM (Quadrature Amplitude Modulation) protocol which can cram more data into the cellular data path.
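For a rough feel of how those three ingredients multiply together, here is a sketch against a single-antenna, 64-QAM LTE baseline. The bits-per-symbol figures for 256-QAM (8) versus 64-QAM (6) are standard modulation facts; the 15% aggregation overhead is purely an illustrative assumption, not a measured figure:

```python
def relative_gain(mimo_streams=4, bits_per_symbol=8, baseline_bits=6,
                  aggregation_overhead=0.15):
    """Throughput multiplier over a 1-stream, 64-QAM (6 bits/symbol) baseline.

    `aggregation_overhead` is a hypothetical penalty for merging streams.
    """
    modulation_gain = bits_per_symbol / baseline_bits  # 256-QAM vs 64-QAM
    return mimo_streams * modulation_gain * (1 - aggregation_overhead)

# Ideal 4x4 MIMO plus 256-QAM, with the assumed 15% merging overhead:
print(round(relative_gain(), 2))  # 4.53x the baseline
```

Even in this idealized form the multiplier is under 5x, which is one reason why bonding existing cellular channels won't get anywhere near a gigabit.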

The data speed that can be delivered to a given cellphone with this technology will depend on a number of factors:

  • The nearest cell site to a customer needs to be upgraded to the technology. I would speculate that this new technology will be phased in at the busiest urban cell sites first, then to busy suburban sites and then perhaps to less busy sites. It’s possible that a cellphone could make connections to multiple towers to make this work, but that’s a challenge with 4G technology and is one of the improvements promised with 5G.
  • The amount of data speed that can be delivered is going to vary widely depending upon the frequencies being used by the cellular carrier. If this uses existing cellular data frequencies, then the speed increase will be a combination of the impact of adding four data streams together, plus whatever boost comes from using 256 QAM, less the new overheads introduced during the process of merging the data streams. There is no reason that this technology could not use the higher millimeter wave spectrum, but that spectrum will use different antennae than lower frequencies.
  • The traffic volume at a given cell site is always an issue. Cell sites that are already busy with single-antenna connections won’t have the spare connections available to give a cellphone more than one channel. Thus, a given connection could consist of one to four channels at any given time.
  • Until the technology gets polished, I’d have to bet that this will work a lot better with a stationary cellphone rather than one moving in a car. So expect this to work better in downtowns, convention centers, etc.
  • And as always, the strength of a connection to a given customer will vary according to how far a customer is from the cell site, the amount of local interference, the weather and all of those factors that affect radio transmissions.

I talked to a few wireless engineers and they guessed that this technology using existing cellular frequencies might create connections as fast as a few hundred Mbps in ideal conditions. But they could only speculate on the new overheads created by adding together multiple channels of cellular signal. There is no doubt that this will speed up cellular data for a customer in the right conditions, with the right phone near the right cell site. But adding four existing cellular signals together will not get close to a gigabit of speed.

It will be interesting to see how the cellular companies market this upgrade. They could call this gigabit LTE, although the speeds are likely to fall far short of a gigabit. They could also market this as 5G, and my bet is that at least a few of them will. I recall back at the introduction of 4G LTE that some carriers started marketing 3.5G as 4G, well before there were any actual 4G deployments. There has been so much buzz about 5G now for a year that the marketing departments at the cellular companies are going to want to tout that their networks are the fastest.

It’s an open question when we’ll start hearing about this. Cellular companies run a risk in touting a new technology if most bandwidth-hungry users can’t yet utilize it. One would think they will want to upgrade some critical mass of cell sites before really pushing this.

It’s also going to be interesting to see how faster cellphone speeds affect the way people use broadband. Today it’s miserable to surf the web on a cellphone. In a city environment most connections are more than 10 Mbps today, but it doesn’t feel that fast because of shortfalls in the cellphone operating systems. Unless those operating systems get faster, there might not be much noticeable difference with a faster connection.

Cellphones today are already capable of streaming a single video stream, although with more bandwidth the streaming will get more reliable and will work under more adverse conditions.

The main impediment to faster cellphones really changing user habits is the data plans of the cellular carriers. Most ‘unlimited’ plans have major restrictions on using a cellphone to tether data for other devices. It’s that tethering that could make cellular data a realistic substitute for a home landline connection. My guess is until we reach a time when there are ubiquitous mini-cell sites spread everywhere that the cellular carriers are not going to let users treat cellular data the same as landline data. Until cellphones are allowed to utilize the broadband available to them, faster cellular data speeds might not have much impact on the way we use our cellphones.

A Hybrid Model for Rural America

Lately I’ve been looking at what I call a hybrid network model for bringing broadband to rural America. The network involves building a fiber backbone to support wireless towers while also deploying fiber to any pockets of homes big enough to justify the outlay. It’s a hybrid between point-to-multipoint wireless and fiber-to-the-home.

I have yet to see a feasible business model for building rural FTTP without some kind of subsidy. There are multiple small telcos building fiber to farms using some subsidy funding from the A-CAM portion of the Universal Service Fund. And there are state broadband grant programs that are helping to build rural fiber. But otherwise it’s hard to justify building fiber in places where the cost per passing is $10,000 per household or higher.

The wireless technology I’m referring to is a point-to-multipoint wireless network using a combination of frequencies including WiFi and 3.65 GHz. The network consists of placing transmitters on towers and beaming signals to dishes at a customer location. In areas without massive vegetation or other impediments this technology can now reliably deliver 25 Mbps download for 6 miles, and higher bandwidth closer to the tower.
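That 6-mile reliable radius covers a surprisingly large footprint per tower. A quick sizing sketch, where the household density is a hypothetical rural figure and the circular footprint ignores terrain and vegetation:

```python
import math

def households_in_range(radius_miles=6.0, households_per_sq_mile=10.0):
    """Households inside a tower's idealized circular footprint."""
    coverage_sq_miles = math.pi * radius_miles ** 2
    return coverage_sq_miles * households_per_sq_mile

# A sparse county at an assumed 10 households per square mile:
print(round(math.pi * 6.0 ** 2))     # ~113 square miles per tower
print(round(households_in_range()))  # ~1131 households potentially reachable
```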

A hybrid model makes a huge difference in financial performance. I’ve now seen an engineering comparison of the costs of all-fiber and hybrid networks in half a dozen counties, and the costs for building a hybrid network are in the range of 20% – 25% of the cost of building fiber to everybody. That cost reduction can result in a business model with a healthy return that creates significant positive cash over time.
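Plugging that 20% – 25% range into the $10,000-per-passing fiber cost mentioned earlier shows the scale of the difference. The 5,000-household county below is a hypothetical example:

```python
def build_cost(passings, cost_per_passing):
    """Total construction cost for a given number of passings."""
    return passings * cost_per_passing

fiber_cost = build_cost(5_000, 10_000)  # all-fiber build at $10k per passing
hybrid_low = fiber_cost * 0.20          # low end of the hybrid range
hybrid_high = fiber_cost * 0.25         # high end of the hybrid range

print(f"all-fiber: ${fiber_cost:,}")                        # $50,000,000
print(f"hybrid: ${hybrid_low:,.0f} - ${hybrid_high:,.0f}")  # $10,000,000 - $12,500,000
```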

There are numerous rural WISPs that are building wireless networks using wireless backhaul rather than fiber to get bandwidth to the towers. That solution might work at first, although I often see new wireless networks of this sort that can’t deliver the 25 Mbps bandwidth to every customer due to backhaul constraints. It’s guaranteed that the bandwidth demands from customers on any broadband network will eventually grow to be larger than the size of the backbone feeding the network. Generally, over a few years a network using wireless backhaul will bog down at the busy hour while a fiber network can keep up with customer bandwidth demand.

One key component of the hybrid network is to bring fiber directly to customers that live close to the fiber. This means bringing fiber to any small towns or even small pockets of 20 or more homes that are close together. It also means serving farms and rural customers that happen to live along the fiber routes. Serving some homes with fiber helps to hold down customer density on the wireless portion of the network – which improves wireless performance. Depending on the layout of a rural county, a hybrid model might bring fiber to as much as one-third of the households in a county while serving the rest with wireless.

Another benefit of the hybrid model is that it moves fiber deeper into rural areas. This can provide the basis for building more fiber in the future or else upgrading wireless technologies over time for rural customers.

A side benefit of this business plan is that it often involves building a few new towers. Areas that need towers typically already have poor or nonexistent cellular coverage. The new towers can make it easier for the cellular companies to fill in their footprint and get better cellular service to everybody.

One reason the hybrid model can succeed is the high customer penetration rate that comes when building the first real broadband network into a rural area that’s never had it. I’ve now seen the customer numbers from numerous rural broadband builds and I’ve seen customer penetration rates range between 65% and 85%.

Unfortunately, this business plan won’t work everywhere, due to the limitations of wireless technology. It’s much harder to deploy a wireless network of this type in an area with heavy woods or lots of hills. This is a business plan for the open plains of the Midwest and West, and anywhere else with large areas of open farmland.

County governments often ask me how they can get broadband to everybody in their county. In areas where the wireless technology will work, a hybrid model seems like the most promising solution.

Self-driving Cars and Broadband Networks

There are two different visions of the future of self-driving cars. Both visions agree that a smart car needs to process a massive amount of information in order to make real-time decisions.

One vision is that smart cars will be really smart and will include a lot of edge computing power and AI that will enable a car to make local decisions as the car navigates through traffic. Cars will likely be able to communicate with neighboring cars to coordinate vehicle spacing and stopping during emergencies. This vision makes only minimal demands for external broadband, except perhaps to periodically update maps and to communicate with things like smart traffic lights.

The other vision of the future is that smart cars will beam massive amounts of data to and from the cloud that includes LiDAR imagery and GPS location information. Big data centers will then coordinate between vehicles. This second vision would require a massively robust broadband network everywhere.

I am surprised by the number of people who foresee the second vision, with massive amounts of data transferred to and from the cloud. Here are just some of the reasons why this scenario is hard to imagine coming to fruition:

  • Volume of Data. The amount of data that would need to be transferred to the cloud is massive. It’s not hard to foresee a car needing to transmit terabytes of data during a trip if all of the decisions are made in a data center. Most prognosticators predict 5G as the technology that would support this network. One thing that seems to be ignored in these predictions is that almost no part of our current broadband infrastructure is able to handle this kind of data flow. We wouldn’t only need a massive 5G deployment, but almost every part of the existing fiber backbone network, down to the local level, would also need to be upgraded. It’s easy to fall into the trap of assuming that fiber can handle massive amounts of data, but the current electronics are not sized for this kind of data volume.
  • Latency. Self-driving cars need to make instantaneous decisions and any delays of data going to and from the cloud will add delays. It’s hard to imagine any external network that can be as fast as a smart car making its own local driving decisions.
  • Migration Path. Even if the cloud is the ultimate network design, how do you get from here to there? We already have smart cars and they make decisions on-board. As that technology improves it doesn’t make sense that we would still pursue a cloud-based solution unless that solution is superior enough to justify the cost of migrating to the cloud.
  • Who will Build? Who is going to pay for the needed infrastructure? This means a 5G network built along every road. It means fiber built everywhere to support that network, including a massive beefing up of bandwidth on all existing fiber networks. Even the biggest ISPs don’t have both the financial wherewithal and the desire to tackle this kind of investment.
  • Who will Pay? And how is this going to get paid for? It’s easy to understand why cellular companies tout this vision as the future since they would be the obvious beneficiary of the revenues from such a network. But is the average family going to be willing to tack on an expensive broadband subscription for every car in the family? And does this mean that those who can’t afford a smart-car broadband connection won’t be able to drive? That’s a whole new definition of a digital divide.
  • Outages. We are never going to have a network that is redundant down to the street level. So what happens to traffic during inevitable fiber cuts or electronics failures?
  • Security. It seems sending live traffic data to the cloud creates the most opportunity for hacking to create chaos. The difficulty of hacking a self-contained smart car makes on-board computing sound far safer.
  • Who Runs the Smart-car Function? What companies actually manage this monstrous network? I’m not very enthused about the idea of having car companies operate the IT functions in a smart-car network, but this sounds like such a lucrative function that I can’t foresee them handing it off to somebody else. There are also likely to be many network players involved, and getting them all to perfectly coordinate sounds like a massively complex task.
  • What About Rural America? Already today we can’t figure out how to finance broadband in rural America. Getting broadband along every rural road is going to be just as expensive as getting it to rural homes. Does this imply a smart-car network that only works in urban areas?
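To put a number on the Volume of Data concern above, here is the arithmetic with an assumed raw sensor output rate. Published estimates for combined LiDAR and camera output vary widely, so treat the 1 GB per second figure as purely illustrative:

```python
def trip_data_terabytes(trip_hours, gb_per_second=1.0):
    """Raw sensor data generated over a trip, in terabytes."""
    return trip_hours * 3600 * gb_per_second / 1000  # GB -> TB

# A one-hour commute at an assumed 1 GB/s of raw sensor output:
print(trip_data_terabytes(1))  # 3.6 TB for a single car, single trip
```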

I fully understand why some in the industry are pushing this vision. This makes a lot of money for the wireless carriers and the vendors who support them. But the above list of concerns make it hard for me to picture the cloud vision. Doing this with on-board computers costs only a fraction of the cost of the big-network solution, and my gut says that dollars will drive the decision.

It’s also worth noting that we already have a similar example of this same kind of decision. The whole smart-city effort is now migrating to smart edge devices rather than exchanging massive data with the cloud. As an example, the latest technology for smart traffic control places smart processors at each intersection rather than sending full-time video to the cloud for processing. The electronics at a smart intersection will only communicate with the hub when it has something to report, like an accident or a car that has run a red light. That requires far less data, meaning far less demand for broadband than sending everything to the cloud. It’s hard to think that smart-cars – which will be the biggest source of raw data yet imagined – would not follow this same trend towards smart edge devices.
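The edge-first pattern at a smart intersection can be sketched in a few lines. The function name, event types, and frame layout here are hypothetical, just to show the shape of report-by-exception versus streaming everything to the cloud:

```python
def events_to_report(frames):
    """Process video frames locally; return only events worth sending to the hub."""
    events = []
    for frame in frames:
        if frame.get("accident"):
            events.append({"type": "accident", "ts": frame["ts"]})
        elif frame.get("red_light_runner"):
            events.append({"type": "red_light_violation", "ts": frame["ts"]})
        # ordinary frames are processed and discarded locally -- no upload
    return events

frames = [
    {"ts": 1},                            # normal traffic: nothing sent
    {"ts": 2, "red_light_runner": True},  # report the violation
    {"ts": 3, "accident": True},          # report the accident
]
print(events_to_report(frames))
# [{'type': 'red_light_violation', 'ts': 2}, {'type': 'accident', 'ts': 3}]
```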

Facebook’s Gigabit WiFi Experiment

Facebook and the city of San Jose, California have been trying for several years to launch a gigabit wireless WiFi network in the downtown area of the city. Branded as Terragraph, the Facebook technology is a deployment of 60 GHz WiFi hotspots that promises data speeds as fast as a gigabit. The delays in the project are a good example of the challenges of launching a new technology and a warning to anybody working on the cutting edge.

The network was first slated to launch by the end of 2016, but is now over a year late. Neither the City nor Facebook will commit to when the network will be launched, and they are also no longer making any guarantees of the speeds that will be achieved.

This delayed launch highlights many of the problems faced by a first-generation technology. Facebook first tested an early version of the technology on their Menlo Park campus, but has been having problems making it work in a real-life deployment. The deployment on light and traffic poles has gone much slower than anticipated, and Facebook is having to spend time after each deployment to make sure that traffic lights still work properly.

There are also business factors affecting the launch. Facebook has had turnover on the Terragraph team. The company has also gotten into a dispute over payments with an installation vendor. It’s not unusual to have business-related delays on a first-generation technology launch since the development team is generally tiny and subject to disruption and the distribution and vendor chains are usually not solidified. There is also some disagreement between the City and Facebook on who pays for the core electronics supporting the network.

Facebook had touted that the network would be significantly less expensive than deploying fiber. But the 60 GHz spectrum gets absorbed by oxygen and water vapor, so Facebook is having to deploy transmitters no more than 820 feet apart – a dense network deployment. Without fiber feeding each transmitter the backhaul is being done using wireless spectrum, which is likely to be contributing to the complication of the deployment as well as the lower expected data speeds.
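Some rough arithmetic shows why 60 GHz forces such a dense deployment. The figures below are assumptions for illustration, not Facebook’s actual design numbers: standard free-space path loss plus a commonly cited oxygen-absorption figure of roughly 15 dB/km near 60 GHz.

```python
import math

# Rough link-budget sketch for a 60 GHz hop at the ~820-foot node spacing.
# Assumed figures for illustration only, not Terragraph design numbers.
freq_hz = 60e9
distance_m = 820 * 0.3048  # ~250 meters
c = 3e8                    # speed of light, m/s

# Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)
fspl_db = 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Oxygen absorption near 60 GHz: roughly 15 dB/km (commonly cited figure)
oxygen_db = 15 * (distance_m / 1000)

print(f"free-space loss:   {fspl_db:.1f} dB")
print(f"oxygen absorption: {oxygen_db:.1f} dB")
print(f"total path loss:   {fspl_db + oxygen_db:.1f} dB")
```

Even at 250 meters the total path loss is on the order of 120 dB, and the oxygen penalty grows linearly with distance – which is why stretching the spacing much beyond this quickly eats whatever link margin the radios have.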

For now, this deployment is in the downtown area and involves 250 pole-mounted nodes to serve a heavy-traffic business district which also sees numerous tourists. The City hopes to eventually find a way to deploy the technology citywide since 12% of the households in the City don’t currently have broadband access – mostly attributed to affordability. The City was hoping to get Google Fiber, but Google canceled plans last year to build in the City.

Facebook says they are still hopeful that they can make the technology work as planned, but that there is still more testing and research needed. At this point there is no specific planned launch date.

This experiment reminds me of other first-generation technology trials in the past. I recall several cities including Manassas, Virginia that deployed broadband over powerline. The technology never delivered speeds much greater than a few Mbps and never was commercially viable. I had several clients that nearly went bankrupt when trying to deploy point-to-point broadband using the LMDS spectrum. And I remember a number of failed trials to deploy citywide municipal WiFi, such as a disastrous trial in Philadelphia, and trials that fizzled in places like Annapolis, Maryland.

I’ve always cautioned my smaller clients to never be guinea pigs for a first-generation technology deployment. I can’t recall a time when a first-generation deployment did not come with scads of problems. I’ve seen clients suffer through first-generation deployments of all of the technologies that are now common – PON fiber, voice softswitches, IPTV, you name it. Vendors are always in a hurry to get a new technology to market and the first few ISPs that deploy a new technology have to suffer through all of the problems that crop up between a laboratory and a real-life deployment. The real victims of a first-generation deployment are often the customers using the network.

The San Jose trial won’t have all of the issues experienced by commercial ISPs since the service will be free to the public. But the City is not immune from the public spurning the technology if it doesn’t work as promised.

The problems experienced by this launch also provide a cautionary tale for the many 5G technology launches promised in 2018 and 2019. Every new launch is going to experience significant problems, which is to be expected when a wireless technology bumps up against the myriad issues of a real-life deployment. If we have learned anything from the past, we can expect a few of the new launches to fizzle and die while a few of the new technologies and vendors will plow through the problems until the technology works as promised. But we’ve also learned that it’s not going to go smoothly, and customers connected to an early 5G network can expect problems.

What’s New With Fiber Optics?

The companies that operate the long-haul fiber networks say that we are in danger of running out of bandwidth capacity on the major fiber routes between major Internet pops. The capacity of the current fiber optics along with the number of pairs of fiber between pops creates a finite maximum amount of bandwidth that can be transmitted – and with worldwide bandwidth usage still growing exponentially it’s not hard to foresee exhausting the capacity on key routes. We can always build new fibers, but it’s hard to build enough fibers anywhere to keep up with exponential growth.

But as expected, there are a number of new developments coming out of research that will probably let us stay ahead of the bandwidth curve. There is always a time delay between lab and manufacturer, but it’s good to know that there are breakthroughs on the way.

Frequency Combs. Engineers at San Diego’s Qualcomm Institute have developed a technique that could significantly improve the throughput on long-haul fiber routes. Today’s fiber technology works by transmitting multiple separate ‘colors’ of light operating simultaneously at different frequencies. But as more frequencies are jammed into a single fiber there is an increase in crosstalk, or interference between frequencies. This interference today limits the ‘power’ of the signal transmitted through a single fiber.

The Qualcomm engineers have developed a technique they are calling frequency combs. This technique grooms the outgoing light signal of each frequency so that the downstream interference is not random and can be predicted. And that is allowing them to then use an algorithm at the other end to detangle and interpret the scrambled data.

In tests this technique has produced remarkable improvements. The engineers were able to increase the transmit power of the signal 20-fold and then transmit the signal 7,400 miles without the need for an optical regenerator. There is still work to be done, but the technique holds great promise for boosting bandwidth on existing fibers.
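The core insight – interference that is deterministic rather than random can be modeled and inverted at the receiver – can be illustrated with a toy example. This is a simplified linear sketch of the idea, not the actual frequency-comb algorithm:

```python
import numpy as np

# Toy illustration: if crosstalk between wavelength channels is
# deterministic, the receiver can represent it as a known mixing
# matrix and invert it to recover the original per-channel data.

rng = np.random.default_rng(0)
channels = 4
signal = rng.standard_normal(channels)  # data on 4 wavelengths

# Deterministic crosstalk: each channel leaks 10% into its neighbors.
crosstalk = np.eye(channels)
for i in range(channels - 1):
    crosstalk[i, i + 1] = crosstalk[i + 1, i] = 0.1

received = crosstalk @ signal                     # scrambled arrival
recovered = np.linalg.solve(crosstalk, received)  # detangle with known model

print(np.allclose(recovered, signal))
```

If the crosstalk were random noise instead of a fixed, predictable mixing, no such inverse would exist – which is exactly why grooming the outgoing signal so the interference is predictable is the key step.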

Corkscrew Lasers. A team of scientists at the University at Buffalo’s School of Engineering and Applied Sciences has developed a new technique that can also increase the amount of bandwidth in a given fiber. They are exploiting a phenomenon, known for decades, in which the angular momentum of light can be used to create what is called an optical vortex. This essentially creates the equivalent of a funnel cloud out of the light beam, which allows piling more data onto a laser data stream.

For years it was thought that this phenomenon would be impossible to control. But the team has been able to focus the vortex to a point small enough to interface with existing computer components. The upside is that the vortex can transmit about ten times more data than a conventional linear laser beam – a boost of a full order of magnitude.

Air Fiber. A team at the University of Maryland has been able to create fiber-like data transmission feeds without using fiber. They are using a short powerful burst of four focused lasers to create a narrow beam they are calling a filament. The hot air expands around this filament creating a tube of low density air. This filament has a lower refractive index than the air around it and creates an effective mirrored tube – that can act just like a fiber optic filament.

The team has demonstrated in the lab that shooting four lasers to create the filament, followed by a short laser burst down the center of the filament, creates a temporary data pipe. The filament lasts only one-trillionth of a second, but the ensuing data beam lasts for several milliseconds – enough time to create a 2-way transmission path. The system would create repeated filaments and thus maintain a fiber-like path through the air.

For now the team has been able to make this work in the lab over a distance of a meter. Their next step is to move this to 50 meters. They think this theoretically could be used to transmit for long distances and could be used to create data paths in places where it’s too expensive to build fiber, and perhaps to transmit to objects in space.

Verizon Announces Residential 5G Roll-out

Verizon recently announced that it will be rolling out residential 5G wireless in as many as five cities in 2018, with Sacramento being the first market. Matt Ellis, Verizon’s CFO, says the company is planning to target 30 million homes with the new technology. The company launched fixed wireless trials in eleven cities this year, delivering broadband wirelessly to antennas mounted in windows. Ellis says the trials using millimeter wave spectrum went better than expected. He says the technology can achieve gigabit speeds over distances as great as 2,000 feet, and that the company has had some success delivering broadband without a true line-of-sight.

The most visible analyst covering this market is Craig Moffett of Moffett-Nathanson. He calls Verizon’s announcement ‘rather squishy’ and notes that there are no discussions about broadband speeds, products to be offered or pricing. Verizon has said that they would not deliver traditional video over these connections, but would use over-the-top video. There have been no additional product descriptions beyond that.

This announcement raises a lot of other questions. First is the technology used. As I look around at the various wireless vendors I don’t see any equipment on the market that comes close to doing what Verizon claims. Most of the vendors are talking about having beta gear in perhaps 2019, and even then, vendors are not promising affordable delivery to single family homes. For Verizon to deliver what it’s announced obviously means that it has developed equipment itself, or quietly partnered on a proprietary basis with one of the major vendors. But no other ISP is talking about this kind of deployment next year, so the question is whether Verizon really has that big of a lead over the rest of the industry.

The other big question is delivery distance. The quoted 2,000 feet distance is hard to buy with this spectrum and that is likely the distance that has been achieved in a test in perfect conditions. What everybody wants to understand is the realistic distance to be used in deployments in normal residential neighborhoods with the trees and many other impediments.

Perhaps the most perplexing question is how much this is going to cost and how Verizon is going to pay for it. The company recently told investors that it does not see capital expenditures increasing in the next few years and may even see a slight decline. That does not jibe with what sounds like a major and costly customer expansion.

Verizon said they chose Sacramento because the City has shown a willingness to make light and utility poles available for the technology. But how many other cities are going to be this willing (assuming that Sacramento really will allow this)? It’s going to require a lot of pole attachments to cover 30 million homes.

But even in Sacramento one has to wonder where Verizon is going to get the fiber needed to support this kind of network. It seems unlikely that the three incumbent providers – Comcast, Frontier and Consolidated Communications – are going to supply fiber to help Verizon compete with them. Since Sacramento is not in the Verizon service footprint, the company would have to go through the time-consuming process of building fiber on its own – a process that the whole industry is claiming is causing major delays in fiber deployment. One only has to look at the issues encountered recently by Google Fiber to see how badly incumbent providers can muck up the pole attachment process.

One possibility comes to mind: perhaps Verizon is only going to deploy the technology in the neighborhoods where it already has fiber-fed cellular towers. That would be a cherry-picking strategy similar to the way that AT&T is deploying fiber-to-the-premise. AT&T seems to only be building where it already has a fiber network nearby that can make a build affordable. While Verizon has a lot of cell sites, it’s hard to envision that a cherry-picking strategy would gain access to 30 million homes. Cherry-picking like this would also make for difficult marketing since the network would be deployed in small non-contiguous pockets.

So perhaps what we will see in 2018 is a modest expansion of this year’s trials rather than a rapid expansion of Verizon’s wireless technology. But I’m only guessing, as is everybody else other than Verizon.

Consolidation of Telecom Vendors

It looks like we might be entering a new round of consolidation of telecom vendors. Within the last year the following consolidations among vendors have been announced:

  • Cisco is paying $5.5 billion for Broadsoft, a market leader in cloud services and software for applications like call centers.
  • ADTRAN purchased CommScope’s EPON fiber equipment business, which makes gear that is also DOCSIS compliant to work with cable networks.
  • Broadcom is paying $5.9 billion to buy Brocade Communications, a market leader in data storage devices as well as a range of telecom equipment.
  • Arris is buying Ruckus Wireless as part of a spinoff from the Brocade acquisition. Arris has a goal to be the provider of wireless equipment for the large cable TV companies.

While none of these acquisitions will cause any immediate impact on small ISPs, I’ve been seeing analysts predict that there is a lot of consolidation coming in the telecom vendor space. I think most of my clients were impacted to some degree by the last wave of vendor consolidation back around 2000. And that wave of consolidation impacted a lot of ISPs.

There are a number of reasons why the industry might be ripe for a round of mergers and acquisitions:

  • One important technology trend is the move by many of the largest ISPs, cable companies and wireless carriers to software defined networking (SDN). This means putting the brains of the technology into centralized data centers, which allows cheaper and simpler electronics at the edge. The advantages of SDN are huge for these big companies. For example, a wireless company could update the software in thousands of cell sites simultaneously instead of having to make upgrades at each site. But SDN also means less costly, less complicated gear.
  • The biggest buyers of electronics are starting to make their own gear. For example, the operators of large data centers like Facebook are working together under the Open Compute Project to create cheap routers and switches for their data centers, which is tanking Cisco’s switch business. In another example, Comcast has designed its own set-top box.
  • The big telcos have made it clear that they are going to be backing out of the copper business. In doing so they are going to drastically cut back on the purchase of gear used in the last mile network. This hurts the vendors that supply much of the electronics for the smaller telcos and ISPs.
  • I think we will be seeing an overall shift over the next few decades of more customers being served by cable TV and wireless networks. Spending on electronics in those markets will benefit few small ISPs.
  • There are not a lot of vendors left in the industry today, so every merger means a little less competition. Just consider FTTH equipment: fifteen years ago there were more than a dozen vendors working in this space, but over time that number has been cut in half.

There are a number of reasons why these trends could foretell future trouble for smaller ISPs, possibly within the next decade:

  • Smaller ISPs have always relied on bigger telcos to pave the way in developing new technology and electronics. But if the trend is towards SDN and towards large vendors designing their own gear, then this will no longer be the case. Consider FTTP technology. If companies like Verizon and AT&T shift towards software defined networking and electronics developed through collaboration, there will be less development done with non-SDN technology. One might hope that the smaller companies could ride the coattails of the big telcos in an SDN environment – but as each large telco develops its own proprietary software to control SDN networks, that is not likely to be practical.
  • Small ISPs also rely on larger vendors buying a large enough volume of electronics to hold down prices. But as the big companies buy less of the standard electronics the rest of us use, you can expect either big price increases or, worse yet, no vendors willing to serve the smaller carrier market. It’s not hard to envision smaller ISPs reduced to competing in the grey market for used and reconditioned gear – something some of my clients operating ten-year-old FTTP networks already do.

I don’t want to sound like the voice of gloom, and I expect that somebody will step into the voids created by these trends. But that’s liable to mean smaller ISPs will end up relying on foreign vendors that won’t come with the same kinds of prices, reliability or service the industry is used to today.