New Technology – October 2017

I’ve run across some amazing new technologies that hopefully will make it to market someday.

Molecular Data Storage. A team of scientists at the University of Manchester recently made a breakthrough with a technology that allows high volumes of data to be stored within individual molecules. They’ve shown the ability to create high-density storage that could save 25,000 gigabits of data on something the size of a quarter.

They achieved the breakthrough using molecules that contain the element dysprosium (that’s going to send you back to the periodic table) cooled to a temperature of -213 C. At that temperature the molecules retain magnetic alignment. Previously this required molecules cooled to -259 C. The group’s goal is to find a way to do this at -196 C, the temperature of affordable liquid nitrogen, which would make this a viable commercial technology.

The most promising use of this kind of dense storage would be in large data centers since this storage is 100 times denser than existing technologies. This would make data centers far more energy efficient while also speeding up computing. That kind of improvement matters because there are predictions that within 25 years data centers will be the largest users of electricity on the planet.

Bloodstream Electricity. Researchers at Fudan University in China have developed a way to generate electricity from a small device immersed in the bloodstream. The device uses stationary nanoscale carbon fibers that act like a tiny hydropower generator. They’ve named the device the ‘fiber-shaped fluidic nanogenerator’ (FFNG).

Obviously there will need to be a lot of testing to make sure that the devices don’t cause problems like blood clots. But the devices hold great promise. A person could use these devices to charge a cellphone or wearable device. They could be used to power pacemakers and other medical devices. They could be inserted to power chips in farm animals that could be used to monitor and track them, or used to monitor wildlife.

Light Data Storage. Today’s theme seems to be small, and researchers at Caltech have developed a small computer chip that is capable of temporarily storing data using individual photons. This is the first team that has been able to reliably capture photons in a readable state on a tiny device. This is an important step in developing quantum computers. Traditional computers store data as either a 1 or a 0, but quantum computers can also store data that is both a 1 and a 0 simultaneously. This has been shown to be possible with photons.

Quantum computing devices need to be small and operate at the nanoscale because they hold data only fleetingly until it can be processed, and nanochips can allow rapid processing. The Caltech device is tiny, around the size of a red blood cell. The team was able to store a photon for 75 nanoseconds, and the ultimate goal is to store information for a full millisecond.

Photon Data Transmission. Researchers at the University of Ottawa have developed a technology to transmit a secure message using photons that are carrying more than one bit of information. This is a necessary step in developing data transmission using light, which would free the world from the many limitations of radio waves and spectrum.

Radio wave data transmission technologies send one bit of data at a time with each passing wavelength. Being able to send more than one bit of data with an individual photon creates the possibility of being able to send massive amounts of data through the open atmosphere. Scientists have achieved the ability to encode multiple bits with a photon in the lab, but this is the first time it’s been done through the atmosphere in a real-world application.

The scientists are now working on a trial between two locations that are almost three miles apart and that will use a technology they call adaptive optics that can compensate for atmospheric turbulence.

There are numerous potential uses for the technology in our industry. This could be used to create ultrahigh-speed connections between a satellite and earth. It could be used to transmit data without fiber between locations with a clear line-of-sight. It could be used as a secure method of communications with airplanes since small light beams can’t be intercepted or hacked.

The other use of the technology is to leverage the ability of photons to carry more than one bit of data to create a new kind of encryption that should be nearly impossible to break. The photon data transmission allows for the use of 4D quantum encryption to carry the keys needed to encrypt and decrypt packets, meaning that every data packet could use a different encryption scheme.

Cable Systems Aren’t All Alike

Big cable companies all over the country are upgrading their networks to DOCSIS 3.1 and announcing that they will soon have gigabit broadband available. Some networks have already been upgraded and we are seeing gigabit products and pricing springing up in various markets around the country. But this does not mean that all cable networks are going to be capable of gigabit speeds, or even that all cable networks are going to upgrade to DOCSIS 3.1. As the headline of this blog says, all cable systems aren’t alike. Today’s blog looks at what that means as it applies to available broadband bandwidth.

A DOCSIS cable network is effectively a radio network that operates only inside the coaxial cable. This is why you will hear cable network capacity described using megahertz, which is a measure of the frequency of a radio transmission. Historically cable networks came in various frequency sizes such as 350 MHz, 650 MHz or 1,000 MHz.

The size of the available frequency, in megahertz, describes the capacity of the network to carry cable TV channels or broadband. Historically one analog TV channel uses about 6 MHz of frequency – meaning that a 1,000 MHz system can transmit roughly 167 channels of traditional analog TV.

Obviously cable networks carry more channels than this, which is why you’ve seen cable companies upgrade to digital systems. The most commonly used digital compression scheme can squeeze six digital channels into the same frequency that carries one analog channel. There are newer compression techniques that can squeeze even more digital channels into one slot.

In a cable network each slice of available frequency can be used either to transmit TV channels or to carry broadband. If a cable company wants more broadband capacity it must create room for the broadband by reducing the number of slots used for TV.

It is the overall capacity of the cable network along with the number of ‘empty’ channel slots that determines how much broadband the network can deliver to customers. A cable system needs roughly 24 empty channel slots to offer gigabit broadband download speeds. It’s a lot harder to carve out enough empty channels on smaller capacity networks. An older cable system operating at 650 MHz has significantly less capacity for broadband than a newer urban system operating at 1,000 MHz or greater capacity.
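To make the arithmetic concrete, here is a rough sketch of the slot math described above. The assumption that an empty 6 MHz slot carries roughly 38 Mbps (the commonly cited figure for a 256-QAM DOCSIS channel) is mine, not a number from the cable operators.

```python
# Rough sketch of the channel-slot arithmetic described above.
# Assumption (mine, not from the post): an empty 6 MHz downstream slot carries
# roughly 38 Mbps, the commonly cited figure for a 256-QAM DOCSIS channel.

MHZ_PER_CHANNEL_SLOT = 6
MBPS_PER_EMPTY_SLOT = 38  # assumed payload per empty slot

def total_slots(system_mhz):
    """Total 6 MHz channel slots available in a cable system."""
    return system_mhz // MHZ_PER_CHANNEL_SLOT

def broadband_capacity_mbps(empty_slots):
    """Approximate downstream capacity from slots not used for TV."""
    return empty_slots * MBPS_PER_EMPTY_SLOT

for system_mhz in (350, 650, 1000):
    print(f"{system_mhz} MHz system: {total_slots(system_mhz)} total channel slots")

# Roughly 24 empty slots gets a system near a gigabit of downstream capacity:
print(f"24 empty slots: ~{broadband_capacity_mbps(24)} Mbps")  # ~912 Mbps
```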

One of the primary benefits of DOCSIS 3.1 is the ability to combine any number of empty channels into a single broadband stream. But the task of upgrading many older networks to DOCSIS 3.1 is not just a simple issue of upgrading the electronics. If a cable company wants the faster broadband speeds they need to also upgrade the overall capacity of the network. And the upgrade from 350 MHz or 650 MHz to 1,000 MHz is often expensive.

The higher capacity network has different operating characteristics that affect the outside cable plant. For example, the placement and spacing of cable repeaters and power taps is different in a higher frequency network. In some cases the coaxial cable used in an older cable network can’t handle the higher frequency and must be replaced. So upgrading an older cable network to get faster speeds often means making a lot of changes in the physical cable plant. To add to the cost, this kind of upgrade also usually means having to change out most or all of the cable settop boxes and cable modems – an expensive undertaking when every customer has multiple devices.

The bottom line of all of this is that it’s not necessarily cheap or easy to upgrade older or lower-capacity cable networks to provide faster broadband. It takes a lot more than upgrading the electronics to get faster speeds and often means upgrading the physical cable plant and replacing settop boxes and cable modems. Cable operators with older networks have to do a cost/benefit analysis to see if it’s worth the upgrade cost to get faster broadband. Since most older cable systems are in rural small towns, this is one more hurdle that must be overcome to provide faster broadband in rural America.

CAF II and Wireless

Frontier Communications just announced that they are testing the use of wireless spectrum to complete the most rural portions of their CAF II build-out requirement. The company accepted $283 million per year for six years ($1.7 billion total) to upgrade broadband to 650,000 rural homes and businesses. That’s a little over $2,600 per location passed. The CAF II program requires that fund recipients increase broadband to speeds of at least 10 Mbps down and 1 Mbps up.

Frontier will be using point-to-multipoint radios where a transmitter is mounted on a tower with the broadband signal then sent to a small antenna at each customer’s location. Frontier hasn’t said what spectrum they are using, but in today’s environment it’s probably a mix of 2.4 GHz and 5 GHz WiFi spectrum and perhaps also some 3.65 GHz licensed spectrum. Frontier, along with CenturyLink and Consolidated, told the FCC a year ago that they would be interested in using the spectrum in the ‘citizens’ radio band’ between 3.7 GHz and 4.2 GHz for this purpose. The FCC opened a docket looking into this spectrum in August and comments in that docket were due to the FCC last week.

I have mixed feelings about using federal dollars to launch this technology. On the plus side, if this is done right the technology can deliver bandwidth of up to 100 Mbps, though in a full deployment speeds are more realistically engineered to deliver a consistent 25 Mbps download. But those kinds of speeds require an open line-of-sight to customers, tall towers that are relatively close to customers (within 3 – 4 miles) and towers that are fiber fed.

But when done poorly the technology delivers much slower broadband. There are WISPs using the technology to deliver speeds that don’t come close to the FCC’s 10/1 Mbps requirement. They often can’t get fiber to their towers and they will often serve customers that are much further than the ideal distance from a tower. Luckily there are many other WISPs using the technology to deliver great rural broadband.

The line-of-sight issue is a big one and this technology is a lot harder to make work in places with lots of trees and hills, making it a difficult delivery platform in Appalachia and much of the Rockies. But the technology is being used effectively in the plains and open desert parts of the country today.

I see downsides to funding this technology with federal dollars. The primary concern is that the technology is not long-lived. The electronics are not generally expected to last more than seven years and then the radios must be replaced. Frontier is using federal dollars to get this installed, and I am sure that the $2,600 per passing is enough to completely fund the deployment. But are they going to keep pouring capital into replacing radios regularly over time? If not, these deployments would be a sick joke to play on rural homes – giving them broadband for a few years until the technology degrades. It’s hard to think of a worse use of federal funds.

Plus, in many of the areas where the technology is useful there are already WISPs deploying point-to-multipoint radios. It seems unfair to use federal dollars to compete against firms who have made private investments to build the identical technology. The CAF money ought to be used to provide something better.

I understand Frontier’s dilemma. In the areas where they took CAF II money they are required to serve everybody who doesn’t have broadband today. My back-of-the envelope calculations tells me that the CAF money was not enough for them to extend DSL into the most rural parts of the CAF areas since extending DSL means building fiber to feed the DSLAMs.

As I have written many times I find the whole CAF program to be largely a huge waste of federal dollars. Using up to $10 billion to expand DSL, point-to-multipoint, and in the case of AT&T cellular wireless is a poor use of our money. That same amount of money could have seeded matching grant programs that would be building a lot of fiber to these same customers. We only have to look at state initiatives like the DEED grants in Minnesota to see that government grant money induces significant private investment in fiber. And as much as the FCC doesn’t want to acknowledge it, building anything less than fiber is nothing more than a Band-aid. We can and should do better.

The Next Big Broadband Application

Ever since Google Fiber and a few municipalities began building gigabit fiber networks people have been asking how we are going to use all of that extra broadband capability. I remember a few years ago there were several industry contests and challenges to try to find the gigabit killer app.

But nobody has found one yet and probably won’t for a while. After all, a gigabit connection is 40 times faster than the FCC’s current definition of broadband. I don’t think Google Fiber or anybody thought that our broadband needs would grow fast enough to quickly fill such a big data pipe. But year after year we all keep using more data, and since the household need for broadband keeps doubling every three years it won’t take too many doublings for some homes to start filling up larger data connections.

But there is one interesting broadband application that might be the next big bandwidth hog. Tim Cook, the CEO of Apple, was recently on Good Morning America and he said that he thinks that augmented reality is going to be a far more significant application in the future than virtual reality and that once perfected that it’s going to be something everybody is going to want.

By now many of you have tried virtual reality. You don a helmet of some kind and are then transported into some imaginary world. The images are in surround-3D and the phenomenon is amazing. And this is largely a gaming application and a solitary one at that.

But augmented reality brings virtual images out into the real world. Movie directors have grasped the idea and one can hardly watch a futuristic show or movie without seeing a board room full of virtual people who are attending a meeting from other locations.

And that is the big promise of augmented reality. It will allow telepresence – the ability for people to sit in their home or office and meet and talk with others as if they are in the same room. This application is of great interest to me because I often travel to hold a few hours of meetings and the idea of doing that from my house would add huge efficiency to my business life. Augmented reality could spell the end of the harried business traveler.

But the technology has far more promise than that. With augmented reality people can share all sorts of other images. You can share a sales presentation or share videos from your latest vacation with grandma. This ability to share images between people could drastically change education, and some predict that over a few decades augmented reality could make classrooms full of in-person students obsolete. This technology would fully enable telemedicine. Augmented reality will enhance aging in the home since shut-ins could still have a full social life.

And of course, the application that intrigues everybody is using augmented reality for entertainment. Taken to the extreme, augmented reality is the Star Trek holodeck. There are already first-generation units that can create a virtual landscape in your living room. It might take a while until the technology gets as crystal clear and convincing as the TV holodeck, but even having some percentage of that capability opens up huge possibilities for gaming and entertainment.

As the quality of augmented reality improves, the technology is going to require big bandwidth connections with a low latency. Rather than just transmitting a 2D video file, augmented reality will be transmitting 3D images in real time. Homes and offices that want to use the technology are going to want broadband connections far faster than the current 25/3 Mbps definition of broadband. Augmented reality might also be the first technology that really pushes the demand for faster upload speeds since they are as necessary as download speeds in enabling a 2-way augmented reality connection.

This is not a distant future technology and a number of companies are working on devices that will bring the first generation of the technology into homes in the next few years. And if we’ve learned anything about technology, once a popular technology is shown to work and there is demand in the marketplace, there will be numerous companies vying to improve it.

If augmented reality was here today the biggest hurdle to using it would be the broadband connections most of us have today. I am certainly luckier than people in rural areas and I have a 60/5 Mbps connection with a cable modem from Charter. But the connection has a lot of jitter and the latency swings wildly. My upload stream is not going to be fast enough to support 2-way augmented reality.

The economic benefits from augmented reality are gigantic. The ability for business people to easily meet virtually would add significant efficiency to the economy. The technology will spawn a huge demand for content. And the demand to use the technology might be the spur that will push ISPs to build faster networks.

Measuring Mobile Broadband Speeds

I was using Google search on my cellphone a few days ago and I thought my connect time was sluggish. That prompted me to take a look at the download speeds on cellular networks, something I haven’t checked in a while.

There are two different companies that track and report on mobile data speeds, and the two companies report significantly different results. First is Ookla, which offers a speed test for all kinds of web connections. Their latest US speed test results represent cellphone users who took their speed test in the first half of this year. Ookla reports that US cellular download speeds have increased 19% over the last year and are now at an average of 22.69 Mbps. They report that the average upload speeds are 8.51 Mbps, an improvement of 4% over the last year. Ookla also found that rural mobile broadband speeds are 20.9% slower than urban speeds, at an average of 17.93 Mbps.

The other company tracking mobile broadband speeds reports a different result. Akamai reports that the average cellular download speed for the whole US was 10.7 Mbps for the first quarter of 2017, less than half of the result shown by Ookla.

This is the kind of difference that can have you scratching your head. But the difference is significant since cellular companies widely brag about the higher Ookla numbers, and these are the numbers that end up being shown to regulators and policy makers.

So what are the differences between the two numbers? The Ookla numbers are the results of cellphone users who voluntarily take their speed test. The latest published numbers represent tests from 3 million cellular devices (smartphones and tablets) worldwide. The Akamai results are calculated in a totally different way. Akamai has monitoring equipment at a big percentage of the world’s internet POPs and they measure the actual achieved speeds of all web traffic that comes through these POPs. They measure the broadband being used on all of the actual connections they can see (which in the US is most of them).

So why would these results be so different and what are the actual mobile broadband speeds in the US? The Ookla results are from speed tests, which last less than a minute. So the Ookla speed test measures the potential speed that a user could theoretically achieve on the web. It’s a test of the full bandwidth capability of the connection. But this is not necessarily what cellphone users actually experience, for a few reasons:

  • Cellphone providers and many other ISPs often provide a burst of speed for the first minute or two of a broadband connection. Since the vast majority of web events are short-term events this provides users with greater speeds than would be achieved if they measured the speed over a longer time interval. Even with a speed test you often can notice the speed tailing off by the end of the test – this is the ‘burst’ slowing down (the sketch after this list illustrates the effect).
  • Many web experts have suspected that the big ISPs provide priority routing for somebody taking a speed test. This would not be hard to do since there are only a few commonly used speed test sites. If priority routing is real, then speed test results are cooked to be higher than would be achieved when connecting to other web sites.
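Here is a toy illustration of the ‘burst’ effect from the first bullet above. The burst window and the two speed tiers are assumed values chosen to echo the Ookla and Akamai averages; this is not any ISP’s actual traffic-management policy.

```python
# Toy illustration (not any ISP's actual policy) of why a short speed test that
# falls inside a temporary speed "burst" reports more than a sustained average.

BURST_SECONDS = 60       # assumed length of the boosted window
BURST_MBPS = 22.7        # assumed boosted rate (echoes the Ookla average)
SUSTAINED_MBPS = 10.7    # assumed steady-state rate (echoes the Akamai figure)

def average_mbps(duration_seconds):
    """Average throughput over a continuous download of the given length."""
    burst_time = min(duration_seconds, BURST_SECONDS)
    steady_time = duration_seconds - burst_time
    megabits = burst_time * BURST_MBPS + steady_time * SUSTAINED_MBPS
    return megabits / duration_seconds

print(average_mbps(30))   # a ~30-second speed test sees only the burst: 22.7
print(average_mbps(600))  # a 10-minute session averages closer to 11.9
```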

The Akamai numbers also can’t be used without some interpretation. They are measuring achieved speeds, which means the actual connection speeds for mobile web connections. If somebody is watching a video on their cellphone, then Akamai would be measuring the speed of that connection, which is not the same as measuring the full potential speed for that same cellphone.

The two companies are measuring something totally different and the results are not comparable. But the good news is that both companies have been tracking the same things for years and so they both can see the changes in broadband speeds. They also both measure speeds around the world and are able to compare US speeds with others. But even that makes for an interesting comparison. Ookla says that US mobile speed test results are 44th in a world ranking. That implies that the mobile networks in other countries make faster connections. Akamai didn’t rank the countries, but the US is pretty far down the list. A lot of countries in Europe and Asia have faster actual connection speeds than the US, and even a few countries in Africa like Kenya and Egypt are faster than here. My conclusion from all of this is that ‘actual’ speeds are somewhere between the two numbers. But I doubt we’ll ever know. The Akamai numbers, though, represent what all cell users in aggregate are actually using, and perhaps that’s the best number.

But back to my own cellphone, which is what prompted me to investigate this. Using the Ookla speed test I showed a 13 Mbps download and 5 Mbps upload speed. There was also a troublesome 147 ms of latency, which is probably what is accounting for my slow web experience. But I also learned how subjective these speeds are. I walked around the neighborhood and got different results as I changed distances from cell towers. This was a reminder that cellular data speeds are locally specific and that the distance you are from a cell site is perhaps the most important factor in determining your speed. And that means that it’s impossible to have a meaningful talk about mobile data speeds since they vary widely within the serving area of every cell site in the world.

Do We Really Need Gigabit Broadband?

I recently read an article in LightReading titled “All That’s Gigabit Doesn’t Glitter.” The article asks whether the industry really needs to make the leap to gigabit speeds. It talks about the industry having other options that can satisfy broadband demand but that telco executives get hooked into the gigabit advertising and want to make the gigabit claim. A few of the points made by the article are thought-provoking and I thought today I’d dig deeper into a few of those ideas.

The big question of course is if telco providers need to be offering gigabit speeds, and it’s a great question. I live in a cord cutter family and I figure that my download needs vary between 25 Mbps and 50 Mbps at any given time (look forward to a blog soon that demonstrates this requirement). I can picture homes with more than our three family members needing more since the amount of download speed needed is largely a factor of the number of simultaneous downloads. And certainly there are people who work at home in data intensive jobs that need far more than this.

There is no doubt that a gigabit is a lot more broadband than I need. If we look at my maximum usage need of 50 Mbps then a gigabit is 20 times more bandwidth capacity than I am likely to need. But I want to harken back to our broadband history to talk about the last time we saw a 20-fold increase in available bandwidth.

A lot of my readers are old enough to remember the agony of working on dial-up Internet. It could take as much as a minute at 56 kbps just to view a picture on the Internet. And we all remember the misery that came when you would start a software update at bedtime and pray that the signal didn’t get interrupted during the multi-hour download process.

But then along came 1 Mbps DSL. This felt like nirvana and it was 20 times faster than dial-up. We were all so excited to get a T1 to our homes. And as millions quickly upgraded to the new technology the services on the web upped their game. Applications became more bandwidth intensive, program downloads grew larger, and web sites were suddenly filled with pictures that you didn’t have to wait to see.

And it took a number of years for that 1 Mbps connection to be used to capacity. After all, this was a 20-fold increase in bandwidth and it took a long time until households began to download enough simultaneous things to use all of that bandwidth. But over time the demand for web broadband kept growing. As cable networks upgraded to DOCSIS 3.0 the web started to get full of video and eventually the 1 Mbps DSL connection felt as bad as dial-up a decade before.

And this is perhaps the major point that the article misses – you can’t just look at today’s needed usage to talk about the best technology. Since 1980 we’ve experienced a doubling of the amount of download speeds needed by the average household every three years. There is no reason to think that growth is stopping, and so any technology that is adequate for a home today is going to feel sluggish in a decade and obsolete in two decades. We’ve now reached that point with older DSL and cable modems that have speeds under 10 Mbps.
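As a quick back-of-the-envelope check on that claim, here is the doubling rule projected forward. The 50 Mbps starting point is an assumption on my part, roughly the upper end of the household need described earlier in this post.

```python
# Back-of-the-envelope projection using the rule of thumb that household demand
# doubles every three years. The 50 Mbps starting point is an assumption based
# on the peak household need described earlier in this post.

def projected_need_mbps(today_mbps, years_out, doubling_period_years=3):
    """Project household bandwidth demand forward assuming steady doubling."""
    return today_mbps * 2 ** (years_out / doubling_period_years)

today_mbps = 50
for years in (0, 10, 20):
    print(f"in {years:2d} years: ~{projected_need_mbps(today_mbps, years):,.0f} Mbps")
# in  0 years: ~50 Mbps
# in 10 years: ~504 Mbps
# in 20 years: ~5,080 Mbps
```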

The other point made by the article is that there are technology steps between today’s technology and gigabit speeds. There are improved DSL technologies and G.Fast that could get another decade out of embedded copper and could be competitive today.

But it’s obvious that the bigger telcos don’t want to invest in copper. I get the impression that if AT&T found an easy path to walk away from all copper they’d do so in a heartbeat. And none of the big companies have done a good job of maintaining copper and most of it is in miserable shape. So these companies are not going to be investing in G.Fast, although as a fiber-to-the-curb technology it would be a great first step towards modernizing their networks to be all-fiber. CenturyLink, AT&T and others are considering G.Fast as a technology to boost the speeds in large apartment buildings, but none of them are giving any serious consideration of upgrading residential copper plant.

It’s also worth noting that not all companies with fiber bit on the gigabit hype. Verizon always had fast products on their FiOS and had the fastest speed in the industry of 250 Mbps for many years. They only recently decided to finally offer a gigabit product.

And this circles back to the question of whether homes need gigabit speeds. The answer is clearly no, and almost everybody offering a gigabit product will tell you that it’s still largely a marketing gimmick. Almost any home that buys a gigabit would have almost the same experience on a fiber-based 100 Mbps product with low fiber latency.

But there are no reasonable technologies in between telephone copper and fiber. No new overbuilder or telco is going to build a coaxial cable network and so there is no other choice than building fiber. While we might not need gigabit speeds today for most homes, give us a decade or two and most homes will grow into that speed, just as we grew from dial-up to DSL. The gigabit speed marketing is really not much different than the marketing of DSL when it first came out. My conclusion after thinking about this is that we don’t need gigabit speeds, but we do need gigabit capable networks – and that is not hype.

What’s the Next FTTP Technology?

There is a lot of debate within the industry about the direction of the next generation of last mile fiber technology. There are three possible technologies that might be adopted as the preferred next generation of electronics – NG-PON2, XGS-PON or active Ethernet. All of these technologies are capable of delivering 10 Gbps streams to customers.

Everybody agrees that the current widely deployed GPON is starting to get a little frayed around the edges. That technology delivers 2.4 Gbps downstream and 1 Gbps upstream for up to 32 customers, although most networks I work with are configured to serve 16 customers at most. All the engineers I talk to think this is still adequate technology for residential customers and I’ve never heard of a neighborhood PON being maxed out for bandwidth. But many ISPs already use something different for larger business customers that demand more bandwidth than a PON can deliver.

The GPON technology is over a decade old, which generally is a signal to the industry to look for the next generation replacement. This pressure usually starts with vendors who want to make money pushing the latest and greatest new technology – and this time it’s no different. But after taking all of the vendor hype out of the equation it’s always been the case that any new technology is only going to be accepted once it achieves an industry-wide economy of scale. And that almost always means being accepted by at least one large ISP. There are a few exceptions to this, like what happened with the first generation of telephone smart switches that found success with small telcos and CLECs first – but most technologies go nowhere until a vendor is able to mass manufacture units to get the costs down.

The most talked about technology is NG-PON2 (next generation passive optical network). This technology works by having tunable lasers that can function at several different light frequencies. This would allow more than one PON to be transmitted simultaneously over the same fiber, but at different wavelengths. But that makes this a complex technology and the key issue is if this can ever be manufactured at price points that can match other alternatives.

The only major proponent of NG-PON2 today is Verizon which recently did a field trial to test the interoperability of several different vendors including Adtran, Calix, Broadcom, Cortina Access and Ericsson. Verizon seems to be touting the technology, but there is some doubt if they alone can drag the rest of the industry along. Verizon seems enamored with the idea of using the technology to provide bandwidth for the small cell sites needed for a 5G network. But the company is not building much new residential fiber. They announced they would be building a broadband network in Boston, which would be their first new construction in years, but there is speculation that a lot of that deployment will use wireless 60 GHz radios instead of fiber for the last mile.

The big question is if Verizon can create an economy of scale to get prices down for NG-PON2. The whole industry agrees that NG-PON2 is the best technical solution because it can deliver 40 Gbps to a PON while also allowing for great flexibility in assigning different customers to different wavelengths. But the best technological solution is not always the winning solution and the concern for most of the industry is cost. Today the early NG-PON2 electronics is being priced at 3 – 4 times the cost of GPON, due in part to the complexity of the technology, but also due to the lack of economy of scale without any major purchaser of the technology.

Some of the other big fiber ISPs like AT&T and Vodafone have been evaluating XGS-PON. This technology can deliver 10 Gbps downstream and 2.5 Gbps upstream – a big step up in bandwidth over GPON. The major advantage of the technology is that it uses a fixed laser, which is far less complex and costly. And unlike Verizon, these two companies are building a lot of new FTTH networks.

And while all of this technology is being discussed, ISPs today are already delivering 10 Gbps data pipes to customers using active Ethernet (AON) technology. For example, US Internet in Minneapolis has been offering 10 Gbps residential service for several years. The active Ethernet technology uses lower cost electronics than most PON technologies, but still can have higher costs than GPON due to the fact that there is a dedicated pair of lasers – one at the core and one at the customer site – for each customer. A PON network instead uses one core laser to serve multiple customers.

It may be a number of years until this is resolved because most ISPs building FTTH networks are still happily buying and installing GPON. One ISP client told me that they are not worried about GPON becoming obsolete because they could double the capacity of their network at any time by simply cutting the number of customers on a neighborhood PON in half. That would mean installing more cards in the core without having to upgrade customer electronics.
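Here is a quick sketch of that ‘split the PON’ arithmetic, using the 2.4 Gbps GPON downstream figure quoted above; the split ratios are just illustrative.

```python
# Sketch of the "split the PON" arithmetic, using the 2.4 Gbps GPON downstream
# figure quoted above. Split ratios are illustrative.

GPON_DOWNSTREAM_MBPS = 2400

def capacity_per_home_mbps(homes_on_pon):
    """Downstream share per home if every home on the PON pulled data at once."""
    return GPON_DOWNSTREAM_MBPS / homes_on_pon

for homes in (32, 16, 8):
    print(f"{homes:2d} homes per PON: ~{capacity_per_home_mbps(homes):.0f} Mbps each")
# 32 homes: ~75 Mbps, 16 homes: ~150 Mbps, 8 homes: ~300 Mbps
```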

From what everybody tells me GPON networks are not experiencing any serious problems. But it’s obvious as the household demand for broadband keeps doubling every three years that the day will come when these networks will experience blockages. But creative solutions like splitting the PON could keep GPON working great for a decade or two. And that might make GPON the preferred technology for a long time, regardless of the vendors’ strong desire to get everybody to pay to upgrade existing networks.

G.Fast over Coax

There is yet another new technology available to carriers – G.Fast over coaxial cable. Early trials of the technology show it works better than G.Fast over telephone copper.

Calix recently did a test of the new coaxial technology and was able to deliver 500+ Mbps for up to 2,000 feet. This is far better than current G.Fast technology over copper which can handle similar data speeds up to about 800 feet. But telephone G.Fast is improving and Calix just demonstrated a telephone copper G.Fast that can deliver 1 Gbps for about 750 feet.

But achieving the kinds of speeds demonstrated by Calix requires high-quality cabling. We all know that the existing telephone and coaxial networks in existing buildings are usually anything but pristine. Many existing coaxial cables in places like apartment buildings have been cut and re-spliced numerous times over the years, which will significantly degrade G.Fast performance.

This new technology is definitely going to work best in niche applications – and there may be situations where it’s the clearly best technology for the price. There are a surprising number of coaxial networks in place in homes, apartment buildings, schools, factories and older office buildings that might be good candidates for the technology.

A number of telcos like CenturyLink and AT&T are starting to use G.Fast over telephone copper to distribute broadband to apartment buildings. As the incumbent telephone company they can make sure that these networks are available to them. But there might be many apartment buildings where the existing coaxial network could be used instead. The ability to go up to 2,000 feet could make a big difference in larger apartment buildings.

Another potential use would be in schools. However, with the expanding demand for broadband in classrooms one has to wonder if 500 Mbps is enough bandwidth to serve and share among a typical string of classrooms – each with their own heavy broadband demand.

There are also a lot of places that have coaxial networks that you might not think about. For example, coaxial wiring was the historic wiring of choice for the early versions of video surveillance cameras in factories and other large businesses. It would not be hard to add WiFi modems to this kind of network. There are tons of older hotels with end-to-end coaxial networks. Any older office building is likely to have coaxial wiring throughout.

But there is one drawback for the technology: the coaxial network can’t be carrying a cable TV signal at the same time. The coaxial G.Fast operates at the same frequencies as a significant chunk of a traditional DOCSIS cable network. To use the technology in a place like an apartment would mean that the coaxial wiring can no longer be used for cable TV delivery. Or it means converting the cable TV signal to IPTV to travel over the G.Fast (but that wouldn’t leave much bandwidth for broadband). Still, there are probably many unused coaxial wiring networks, and the technology could use them with very little required rewiring.
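As a rough illustration of the frequency conflict, the sketch below checks for overlap between assumed band plans. The band edges are my assumptions (the commonly cited 106 MHz G.fast profile and a traditional sub-split DOCSIS layout), not figures from the post or from any vendor.

```python
# Rough illustration of the frequency-overlap concern. The band edges are
# assumptions (the commonly cited 106 MHz G.fast profile and a traditional
# sub-split DOCSIS band plan), not figures from the post or from any vendor.

GFAST_BAND = (2, 106)            # MHz, assumed G.fast 106 MHz profile
DOCSIS_UPSTREAM = (5, 42)        # MHz, assumed legacy DOCSIS upstream band
DOCSIS_DOWNSTREAM = (54, 1002)   # MHz, assumed DOCSIS downstream band

def overlap_mhz(band_a, band_b):
    """Width of the overlap between two (start, stop) frequency ranges in MHz."""
    start = max(band_a[0], band_b[0])
    stop = min(band_a[1], band_b[1])
    return max(0, stop - start)

print(overlap_mhz(GFAST_BAND, DOCSIS_UPSTREAM))    # 37 MHz of overlap
print(overlap_mhz(GFAST_BAND, DOCSIS_DOWNSTREAM))  # 52 MHz of overlap
```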

It’s more likely that the coaxial G.Fast could coexist with existing applications in places like factories. Those networks typically use MoCA to feed the video cameras, at frequencies that are higher than DOCSIS cable networks.

But my guess is that the interference issue will be a big one for many potential applications. Most apartments and schools are going to still be using their networks to deliver traditional video. And many other coaxial networks will have been so chopped up and re-spliced over time as to present a real challenge for the technology.

But this is one more technology to put into the toolbox, particularly for companies that bring broadband to a lot of older buildings. There are probably many cases where this could be the most cost effective solution.

More Pressure on WiFi

As if we really needed more pressure put onto our public WiFi spectrum, both Verizon and AT&T are now launching Licensed Assisted Access (LAA) broadband for smartphones. This is the technology that allows cellular carriers to mix LTE spectrum with the unlicensed 5 GHz spectrum for providing cellular broadband. The LAA technology allows for the creation of ‘fatter’ data pipes by combining multiple frequencies, and the wider the data pipe the more data that makes it to the end-user customer.

When carriers combine frequencies using LAA they can theoretically create a data pipe as large as a gigabit while only using 20 MHz of licensed frequency. The extra bandwidth for this application comes mostly from the unlicensed 5 GHz band and is similar to the fastest speeds that we can experience at home using this same frequency with 802.11AC. However, such high-speed bandwidth is only useful for a short distance of perhaps 150 feet and the most practical use of LAA is to boost cellphone data signals for customers closest to a cell tower. That’s going to make LAA technology most beneficial in dense customer environments like busy downtown areas, stadiums, etc. LAA isn’t going to provide much benefit to rural cellphone towers or those along interstate highways.

Verizon recently did a demonstration of the LAA technology that achieved a data speed of 953 Mbps. They did this using three 5 GHz channels combined with one 20 megahertz channel of AWS spectrum. Verizon used a 4X4 MIMO (multiple input / multiple output) antenna array and 256 QAM modulation to achieve this speed. The industry has coined the new term of four-carrier aggregation for the technology since it combines 4 separate bands of bandwidth into one data pipe. A customer would need a specialized MIMO antenna to receive the signal and also would need to be close to the transmitter to receive this kind of speed.
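Here is the rough carrier-aggregation arithmetic behind that demo. The per-carrier peak rate is an assumed number chosen so that four aggregated 20 MHz carriers land near the quoted 953 Mbps; it is not a published Verizon or 3GPP figure.

```python
# Rough carrier-aggregation arithmetic for the demo described above. The
# per-carrier peak rate is an assumed value chosen so that four aggregated
# 20 MHz carriers land near the quoted 953 Mbps; it is not a published figure.

ASSUMED_PEAK_MBPS_PER_CARRIER = 238   # assumed 20 MHz carrier w/ 4x4 MIMO, 256-QAM

def aggregate_peak_mbps(num_carriers):
    """Theoretical peak for N aggregated carriers under the assumption above."""
    return num_carriers * ASSUMED_PEAK_MBPS_PER_CARRIER

# One licensed AWS carrier plus three unlicensed 5 GHz carriers:
print(aggregate_peak_mbps(4))   # ~952 Mbps, in line with the demo result
```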

Verizon is starting to update selected cell sites with the technology this month. AT&T has announced that they are going to start introducing LAA technology along with 4-way carrier aggregation by the end of this year. It’s important to note that there is a big difference between the Verizon test with 953 Mbps speeds and what customers will really achieve in the real world. There are numerous factors that will limit the benefits of the technology. First, there aren’t yet any handsets with the right antenna arrays and it’s going to take a while to introduce them. These antennas look like they will be big power eaters, meaning that handsets that try to use this bandwidth all of the time will have short battery lives. But there are more practical limitations. One is the distance limitation – many customers will be out of range of the strongest LAA signals. A cellular company is also not going to try to make this full data connection using all 4 channels to one customer for several reasons, the primary one being the availability of the 5 GHz frequency.

And that’s where the real rub comes in with this technology. The FCC approved the use of this new technology last year. They essentially gave the carriers access to the WiFi spectrum for free. The whole point of unlicensed spectrum is to provide data pipes for all of the many uses not made by licensed wireless carriers. WiFi is clearly the most successful achievement of the FCC over the last few decades. Providing big data pipes for public use has spawned gigantic industries, and it’s hard to find a house these days without a WiFi router.

The cellular carriers have paid billions of dollars for spectrum that only they can use. The rest of the public uses a few bands of ‘free’ spectrum, and uses it very effectively. To allow the cellular carriers to dip into the WiFi spectrum runs the risk of killing that spectrum for all of the other uses. The FCC supposedly is requiring that the cellular carriers not grab the 5 GHz spectrum when it’s already busy in use. But to anybody that understands how WiFi works that seems like an inadequate protection, because any use of this spectrum causes interference by definition.

In practical use if a user can see three or more WiFi networks they experience interference, meaning that more than one network is trying to use the same channel at the same time. It is the nature of this interference that causes the most problems with WiFi performance. When two signals are both trying to use the same channel, the WiFi standard causes all competing devices to go quiet for a short period of time, and then both restart and try to grab an open channel. If the two signals continue to interfere with each other, the delay time between restarts increases exponentially in a phenomenon called backoff. As there are more and more collisions between competing networks, the backoff increases and the performance of all devices trying to use the spectrum decays. In a network experiencing backoff the data is transmitted in short bursts between the times that the connection starts and stops from the interference.
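For readers who want to see the mechanism, here is a minimal sketch of that binary exponential backoff behavior. The contention-window values are generic CSMA/CA-style numbers, not the internals of any particular WiFi chipset.

```python
import random

# Minimal sketch of binary exponential backoff: after each collision the
# contention window doubles, so a device waits longer (on average) before it
# retries. The window sizes are generic CSMA/CA-style values, not the exact
# parameters of any WiFi chipset.

CW_MIN = 15      # initial contention window, in slots
CW_MAX = 1023    # cap on the contention window

def contention_window(collisions):
    """Window size after N consecutive collisions."""
    return min(CW_MAX, (CW_MIN + 1) * 2 ** collisions - 1)

def backoff_slots(collisions):
    """Random wait, in slots, chosen after the Nth consecutive collision."""
    return random.randint(0, contention_window(collisions))

random.seed(1)
for collisions in range(6):
    print(f"after {collisions} collisions: window {contention_window(collisions):4d} "
          f"slots, this retry waits {backoff_slots(collisions)}")
```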

And this means that when the cellular companies use the 5 GHz spectrum they will be interfering with the other users of that frequency. That’s the way WiFi was designed to work, so the interference is unavoidable. This means other WiFi users in the immediate area around an LAA transmitter will experience more interference, and it also means a degraded WiFi signal for the cellular users of the technology – and the reason they won’t get speeds even remotely close to Verizon’s demo speeds. But the spectrum is free for the cellular companies and they are going to use it, to the detriment of all of the other uses of the 5 GHz spectrum. With this decision the FCC might well have nullified the tremendous benefits that we’ve seen from the 5 GHz WiFi band.

FCC Takes a New Look at 900 MHz

The FCC continues its examination of the best use of spectrum and released a Notice of Inquiry on August 4 looking at the 900 MHz band of spectrum. They want to know if there is some better way to use the spectrum block. They are specifically looking at the spectrum between 896-901 MHz and 935-940 MHz.

The FCC first looked at this frequency in 1986 and the world has changed drastically since then. The frequency is currently divided into 399 narrowband channels grouped into 10-channel blocks. This licensed use of the spectrum varies by MTA (Major Trading Area), where channels have been allocated according to local demand from commercial users.
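A quick sanity check of those channelization numbers: 399 narrowband channels spread across the 5 MHz in each direction implies channels of roughly 12.5 kHz. The 12.5 kHz width is inferred from the figures above rather than quoted from the docket.

```python
# Quick check of the channelization figures above. The ~12.5 kHz channel width
# is inferred from the numbers in the post (399 channels across 896-901 MHz).

BAND_START_MHZ = 896
BAND_STOP_MHZ = 901
NUM_NARROWBAND_CHANNELS = 399

band_khz = (BAND_STOP_MHZ - BAND_START_MHZ) * 1000
print(f"~{band_khz / NUM_NARROWBAND_CHANNELS:.1f} kHz per narrowband channel")  # ~12.5
```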

One of the more common uses of the spectrum is for SMR service (Specialized Mobile Radio), which is the frequency that’s been used in taxis and other vehicle fleets for many years. The other use is more commonly referred to as B/ILT purposes (Business/Industrial Land Transportation). This supports radios in work fleets, and is used widely to monitor and control equipment (such as monitoring water pumps in a municipal water system). The frequency was also widely used historically for public safety / police networks using push-button walkie-talkies (although cellphones have largely taken over that function).

The FCC currently identifies 2,700 sites used by 500 licensees in the country that are still using B/ILT radios and technologies. These uses include security and public alert notifications at nuclear power plants, flood warning systems, smart grid monitoring for electric networks, and monitoring of petroleum refineries and natural gas distribution systems.

But we live in a bandwidth hungry world. One of the characteristics of this spectrum is that it’s largely local in nature (good for distances of up to a few miles, at most). When mapping the current uses of the frequency it’s clear that there are large portions of the country where the spectrum is not being used. And this has prompted the FCC to ask if there is a better use of the spectrum.

The FCC typically finds ways to accommodate existing users, and regardless of any changes made it’s unlikely that they are going to cut off use of the spectrum in nuclear plants, electric grids and water systems. But to a large degree the spectrum is being underutilized. Many of the older uses of the spectrum such as walkie-talkies and push-to-talk radios have been supplanted by newer technologies using other spectrum. With that said, there are still some places where the old radios of this type are still in use.

The FCC’s action was prompted by a joint proposal by the Enterprise Wireless Alliance (EWA) and Pacific DataVision (PDV). This petition asks for the frequency to be realigned into a pair of 3 MHz bands that can be used for wireless broadband and a pair of 2 MHz bands that would continue to support the current narrowband uses of the spectrum. They propose that the broadband channels be auctioned to a single user in each BTA but that the narrowband uses continue to be licensed upon request in the same manner as today.

This docket is a perfect example of the complexities that the FCC always has to deal with in changing the way that we use spectrum. The big question that always has to be addressed by the FCC is what to do with existing users of the spectrum. Any new allocation plan is going to cause many existing users to relocate their spectrum within the 900 MHz block or to spectrum elsewhere. And it’s generally been the practice of the FCC to make new users of spectrum pay to relocate older uses of spectrum that must be moved. And so the FCC must make a judgement call about whether it makes monetary sense to force relocation.

The FCC also has to always deal with technical issues like interference. Changing the way the spectrum will be used from numerous narrowband channels to a few wideband channels is going to change the interference patterns with other nearby spectrum. And so the FCC must make a determination of the likelihood of a spectrum change not causing more problems than it solves.

This particular band is probably one of the simpler such tasks the FCC can tackle. While the users of the spectrum perform critical tasks with the current spectrum, there is not an unmanageable number of current users and there are also large swaths of the US that have no use at all. But still, the FCC does not want to interfere with the performance at nuclear plants, petroleum refineries or electric grids.

For anybody that wants to read more about how the FCC looks at spectrum, here is the FCC Docket 17-200. The first thing you will notice is that this document, like most FCC documents dealing with wireless spectrum, is probably amongst the most jargon-heavy documents produced by the FCC. But when talking about spectrum the jargon is useful because the needed discussions must be precise. And it is a good primer on the complications involved in changing the way we use spectrum. There has been a recent clamor from Congress to free up more spectrum for cellular broadband, but this docket is a good example of how complex an undertaking that can be.