Facebook’s Gigabit WiFi Experiment

Facebook and the city of San Jose, California have been trying for several years to launch a gigabit WiFi network in the downtown area of the city. Branded as Terragraph, the Facebook technology is a deployment of 60 GHz WiFi hotspots that promises data speeds as fast as a gigabit. The delays in the project are a good example of the challenges of launching a new technology and are a warning to anybody working on the cutting edge.

The network was first slated to launch by the end of 2016, but is now over a year late. Neither the City nor Facebook will commit to when the network will be launched, and they are no longer making any guarantees about the speeds that will be achieved.

This delayed launch highlights many of the problems faced by a first-generation technology. Facebook first tested an early version of the technology on their Menlo Park campus, but has been having problems making it work in a real-life deployment. The deployment on light and traffic poles has gone much slower than anticipated, and Facebook is having to spend time after each deployment to make sure that traffic lights still work properly.

There are also business factors affecting the launch. Facebook has had turnover on the Terragraph team. The company has also gotten into a dispute over payments with an installation vendor. It’s not unusual to have business-related delays on a first-generation technology launch since the development team is generally tiny and subject to disruption and the distribution and vendor chains are usually not solidified. There is also some disagreement between the City and Facebook on who pays for the core electronics supporting the network.

Facebook had touted that the network would be significantly less expensive than deploying fiber. But the 60 GHz spectrum gets absorbed by oxygen and water vapor, so Facebook is having to deploy transmitters no more than 820 feet apart – a dense network deployment. Without fiber feeding each transmitter, the backhaul is being done using wireless spectrum, which is likely contributing to the complexity of the deployment as well as to the lower expected data speeds.
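
To put that 820-foot spacing into rough perspective, here is a minimal link-budget sketch in Python. The free-space path loss formula is standard physics; the 15 dB/km oxygen absorption figure is an assumed typical sea-level value for 60 GHz, and the sketch ignores rain, foliage, and antenna gains, so treat it as illustrative only.

import math

def path_loss_60ghz_db(distance_m, oxygen_db_per_km=15.0):
    """Rough 60 GHz link loss: free-space path loss plus oxygen absorption.
    The 15 dB/km figure is an assumed sea-level value; rain, foliage and
    misalignment add further losses that are ignored here."""
    freq_hz = 60e9
    c = 3e8
    fspl = 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)
    return fspl + oxygen_db_per_km * distance_m / 1000

# 820 feet is roughly 250 meters -- the node spacing cited for Terragraph
for d in (125, 250, 500, 1000):
    print(f"{d:>5} m: {path_loss_60ghz_db(d):.1f} dB")

Doubling the distance adds roughly 6 dB of free-space loss on top of the linear oxygen absorption, which is why the nodes have to be packed so densely.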

For now, this deployment is in the downtown area and involves 250 pole-mounted nodes to serve a heavy-traffic business district which also sees numerous tourists. The City hopes to eventually find a way to deploy the technology citywide since 12% of the households in the City don’t currently have broadband access – mostly attributed to affordability. The City was hoping to get Google Fiber, but Google canceled plans last year to build in the City.

Facebook says they are still hopeful that they can make the technology work as planned, but that there is still more testing and research needed. At this point there is no specific planned launch date.

This experiment reminds me of other first-generation technology trials in the past. I recall several cities including Manassas, Virginia that deployed broadband over powerline. The technology never delivered speeds much greater than a few Mbps and never was commercially viable. I had several clients that nearly went bankrupt when trying to deploy point-to-point broadband using the LMDS spectrum. And I remember a number of failed trials to deploy citywide municipal WiFi, such as a disastrous trial in Philadelphia, and trials that fizzled in places like Annapolis, Maryland.

I’ve always cautioned my smaller clients to never be guinea pigs for a first-generation technology deployment. I can’t recall a time when a first-generation deployment did not come with scads of problems. I’ve seen clients suffer through first-generation deployments of all of the technologies that are now common – PON fiber, voice softswitches, IPTV, you name it. Vendors are always in a hurry to get a new technology to market and the first few ISPs that deploy a new technology have to suffer through all of the problems that crop up between a laboratory and a real-life deployment. The real victims of a first-generation deployment are often the customers using the network.

The San Jose trial won't have all of the issues experienced by commercial ISPs since the service will be free to the public. But the City is not immune from the public spurning the technology if it doesn't work as promised.

The problems experienced by this launch also provide a cautionary tale for the many 5G technology launches promised in 2018 and 2019. Every new launch is going to experience significant problems, which is to be expected when a wireless technology bumps up against the myriad issues of a real-life deployment. If we have learned anything from the past, we can expect a few of the new launches to fizzle and die while a few of the new technologies and vendors will plow through the problems until the technology works as promised. But we've also learned that it's not going to go smoothly and customers connected to an early 5G network can expect problems.

What’s New With Fiber Optics?

The companies that operate the long-haul fiber networks say that we are in danger of running out of bandwidth capacity on the major fiber routes between Internet POPs. The capacity of the current fiber optics along with the number of pairs of fiber between POPs creates a finite maximum amount of bandwidth that can be transmitted – and with worldwide bandwidth usage still growing exponentially it's not hard to foresee exhausting the capacity on key routes. We can always build new fibers, but it's hard to build enough fiber anywhere to keep up with exponential growth.
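
As a back-of-the-envelope illustration of how unforgiving exponential growth is, here is a small Python sketch. The 40% utilization and 40% annual growth figures are hypothetical, not numbers from any carrier.

import math

def years_until_full(current_utilization, annual_growth):
    """Years until a route hits 100% of its lit capacity if traffic keeps
    compounding at the same annual rate."""
    return math.log(1 / current_utilization) / math.log(1 + annual_growth)

# A hypothetical route already at 40% of capacity with traffic growing 40% per year
print(round(years_until_full(0.40, 0.40), 1), "years of headroom left")

The answer comes out at under three years, which is why the long-haul operators worry even about routes that look comfortably underused today.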

But as expected, there are a number of new developments coming out of research that will probably let us stay ahead of the bandwidth curve. There is always a time delay between lab and manufacturer, but it’s good to know that there are breakthroughs on the way.

Frequency Combs. Engineers at the Qualcomm Institute at UC San Diego have developed a technique that could significantly improve the throughput on long-haul fiber routes. Today's fiber technology works by transmitting multiple separate 'colors' of light operating simultaneously at different frequencies. But as more frequencies are jammed into a single fiber there is an increase in crosstalk, or interference between frequencies. This interference today limits the 'power' of the signal transmitted through a single fiber.

The Qualcomm engineers have developed a technique they are calling frequency combs. The technique grooms the outgoing light signal of each frequency so that the downstream interference is not random and can be predicted. That allows them to use an algorithm at the receiving end to untangle and interpret the scrambled data.
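
The published technique is far more sophisticated than this, but the core idea – that a distortion which is deterministic can be computed and stripped off at the receiver, unlike random noise – can be shown with a toy numpy sketch. The constellation and the nonlinear coefficient below are made up for illustration and are not the team's actual method.

import numpy as np

rng = np.random.default_rng(0)

# Toy 16-QAM symbols standing in for one wavelength on the fiber
levels = np.array([-3, -1, 1, 3])
symbols = rng.choice(levels, 1000) + 1j * rng.choice(levels, 1000)

# Apply a deterministic power-dependent phase rotation -- a crude stand-in
# for crosstalk whose shape the transmitter has made predictable
gamma = 0.03
distorted = symbols * np.exp(1j * gamma * np.abs(symbols) ** 2)

# A phase rotation preserves amplitude, so the receiver can recompute the
# identical phase shift from what it received and undo it exactly
recovered = distorted * np.exp(-1j * gamma * np.abs(distorted) ** 2)

print(np.allclose(recovered, symbols))  # True -- the distortion is removed

Random noise could not be removed this way, which is why making the crosstalk predictable is the whole trick.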

In testing, this technique has produced remarkable improvements. The engineers were able to increase the transmit power of the signal 20-fold and then transmit the signal for 7,400 miles without the need for an optical regenerator. There is still work to be done, but this technique holds great promise for boosting bandwidth on existing fibers.

Corkscrew Lasers. A team of scientists at the University at Buffalo's School of Engineering and Applied Sciences has developed a new technique that can also increase the amount of bandwidth in a given fiber. They are taking advantage of a phenomenon, known for decades, in which the angular momentum of light can be used to create what is called an optical vortex. This essentially creates the equivalent of a funnel cloud out of the light beam, which allows piling more data onto a laser data stream.

For years it was thought that this phenomenon would be impossible to control. But the team has been able to focus the vortex to a point small enough that it can interface with existing computer components. The upside is that the vortex can transmit about ten times more data than a conventional linear laser beam – a boost of a full order of magnitude in carrying capacity.

Air Fiber. A team at the University of Maryland has been able to create fiber-like data transmission feeds without using fiber. They are using a short, powerful burst from four focused lasers to create a narrow beam they are calling a filament. The hot air expands around this filament, creating a tube of low-density air. That low-density tube has a lower refractive index than the air inside it, creating an effective mirrored tube that can guide light just like a fiber optic cable.

The team has demonstrated in the lab that shooting the four lasers to create the filament, followed by a short laser burst down the center of the filament, creates a temporary data pipe. The filament lasts only one-trillionth of a second, but the ensuing data beam lasts for several milliseconds – enough time to create a 2-way transmission path. The system would create repeated filaments to maintain a fiber-like path through the air.

For now the team has been able to make this work in the lab over a distance of a meter. Their next step is to move this to 50 meters. They think this theoretically could be used to transmit for long distances and could be used to create data paths in places where it’s too expensive to build fiber, and perhaps to transmit to objects in space.

Verizon Announces Residential 5G Roll-out

Verizon recently announced that it will be rolling out residential 5G wireless in as many as five cities in 2018, with Sacramento being the first market. Matt Ellis, Verizon's CFO, says that the company is planning on targeting 30 million homes with the new technology. The company launched fixed wireless trials in eleven cities this year. The trials delivered broadband wirelessly to antennas mounted in windows. Ellis says that the trials using millimeter wave spectrum went better than expected. He says the technology can achieve gigabit speeds over distances as great as 2,000 feet. He also says the company has had some success in delivering broadband without a true line-of-sight.

The most visible analyst covering this market is Craig Moffett of Moffett-Nathanson. He calls Verizon’s announcement ‘rather squishy’ and notes that there are no discussions about broadband speeds, products to be offered or pricing. Verizon has said that they would not deliver traditional video over these connections, but would use over-the-top video. There have been no additional product descriptions beyond that.

This announcement raises a lot of other questions. First is the technology being used. As I look around at the various wireless vendors I don't see any equipment on the market that comes close to doing what Verizon claims. Most of the vendors are talking about having beta gear in perhaps 2019, and even then, vendors are not promising affordable delivery to single family homes. For Verizon to deliver what it has announced obviously means that the company has developed equipment itself, or quietly partnered on a proprietary basis with one of the major vendors. But no other ISP is talking about this kind of deployment next year, and so the question is whether Verizon really has that big of a lead over the rest of the industry.

The other big question is delivery distance. The quoted 2,000-foot distance is hard to buy with this spectrum; it is likely the distance achieved in a test under perfect conditions. What everybody wants to understand is the realistic distance to be used in deployments in normal residential neighborhoods with trees and many other impediments.

Perhaps the most perplexing question is how much this is going to cost and how Verizon is going to pay for it. The company recently told investors that it does not see capital expenditures increasing in the next few years and may even see a slight decline. That does not jibe with what sounds like a major and costly customer expansion.

Verizon said they chose Sacramento because the City has shown a willingness to make light and utility poles available for the technology. But how many other cities are going to be this willing (assuming that Sacramento really will allow this)? It’s going to require a lot of pole attachments to cover 30 million homes.

But even in Sacramento one has to wonder where Verizon is going to get the fiber needed to support this kind of network. It seems unlikely that the three incumbent providers – Comcast, Frontier and Consolidated Communications – are going to supply fiber to help Verizon compete with them. Since Sacramento is not in the Verizon service footprint, the company would have to go through the time-consuming process of building fiber on its own – a process that the whole industry is claiming is causing major delays in fiber deployment. One only has to look at the issues encountered recently by Google Fiber to see how badly incumbent providers can muck up the pole attachment process.

One possibility comes to mind: perhaps Verizon is only going to deploy the technology in the neighborhoods where it already has fiber-fed cellular towers. That would be a cherry-picking strategy similar to the way that AT&T is deploying fiber-to-the-premise. AT&T seems to only be building where it already has a fiber network nearby that can make a build affordable. While Verizon has a lot of cell sites, it's hard to envision that a cherry-picking strategy would gain access to 30 million homes. Cherry-picking like this would also make for difficult marketing since the network would be deployed in small non-contiguous pockets.

So perhaps what we will see in 2018 is a modest expansion of this year’s trials rather than a rapid expansion of Verizon’s wireless technology. But I’m only guessing, as is everybody else other than Verizon.

Consolidation of Telecom Vendors

It looks like we might be entering a new round of consolidation of telecom vendors. Within the last year the following consolidations among vendors have been announced:

  • Cisco is paying $5.5 billion for Broadsoft, a market leader in cloud services and software for applications like call centers.
  • ADTRAN purchased CommScope's EPON fiber access business, which makes EPON equipment that is also DOCSIS-compliant to work with cable networks.
  • Broadcom is paying $5.9 billion to buy Brocade Communications, a market leader in storage networking as well as a range of telecom equipment.
  • Arris is buying Ruckus Wireless as part of a spinoff from the Brocade acquisition. Arris has a goal to be the provider of wireless equipment for the large cable TV companies.

While none of these acquisitions will cause any immediate impact on small ISPs, I've been seeing analysts predict that there is a lot more consolidation coming in the telecom vendor space. I think most of my clients were impacted to some degree by the last wave of vendor consolidation back around 2000, and that wave of consolidation affected a lot of ISPs.

There are a number of reasons why the industry might be ripe for a round of mergers and acquisitions:

  • One important technology trend is the move by a lot of the largest ISPs, cable companies and wireless carriers to software defined networking (see the short sketch after this list). This means putting the brains of the technology into centralized data centers, which allows cheaper and simpler electronics at the edge. The advantages of SDN are huge for these big companies. For example, a wireless company could update the software in thousands of cell sites simultaneously instead of having to make upgrades at each site. But for vendors, SDN means selling less costly and less complicated gear.
  • The biggest buyers of electronics are starting to make their own gear. For example, the operators of large data centers like Facebook are working together under the Open Compute Project to create cheap routers and switches for their data centers, which is tanking Cisco’s switch business. In another example, Comcast has designed its own settop box.
  • The big telcos have made it clear that they are going to be backing out of the copper business. In doing so they are going to drastically cut back on the purchase of gear used in the last mile network. This hurts the vendors that supply much of the electronics for the smaller telcos and ISPs.
  • I think we will see an overall shift over the next few decades toward more customers being served by cable TV and wireless networks. Spending on electronics in those markets will benefit few small ISPs.
  • There are not a lot of vendors left in the industry today, and so every merger means a little less competition. Just consider FTTH equipment. Fifteen years ago there were more than a dozen vendors working in this space, but over time that number has been cut in half.
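
To make the SDN point concrete, here is a toy Python sketch of the idea – the class names and the firmware-push example are invented for illustration and bear no relation to any vendor's actual controller software.

from dataclasses import dataclass

@dataclass
class CellSite:
    name: str
    firmware: str = "v1.0"

class Controller:
    """Toy SDN controller: the intelligence lives centrally, so one
    decision updates every edge device in a single pass."""
    def __init__(self, sites):
        self.sites = sites

    def push_firmware(self, version):
        for site in self.sites:
            site.firmware = version

sites = [CellSite(f"site-{i}") for i in range(5000)]
Controller(sites).push_firmware("v2.0")
print(sum(s.firmware == "v2.0" for s in sites), "sites updated from one place")

The edge gear in that model only has to forward traffic, which is exactly why it can be cheaper and simpler.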

There are a number of reasons why these trends could foretell future trouble for smaller ISPs, possibly within the next decade:

  • Smaller ISPs have always relied on bigger telcos to pave the way in developing new technology and electronics. But if the trend is towards SDN and towards large vendors designing their own gear then this will no longer be the case. Consider FTTP technology. If companies like Verizon and AT&T shift towards software defined networking and electronics developed through collaboration there will be less development done with non-SDN technology. One might hope that the smaller companies could ride the coattails of the big telcos in an SDN environment – but as each large telco develops its own proprietary software to control SDN networks, that is not likely to be practical.
  • Small ISPs also rely on the big companies buying enough volume of electronics to hold down prices. But as the big companies buy fewer of the standard electronics that the rest of us use, you can expect either big price increases or, worse yet, no vendors willing to serve the smaller carrier market. It's not hard to envision smaller ISPs reduced to competing in the grey market for used and reconditioned gear – something some of my clients operating ten-year-old FTTP networks already do.

I don’t want to sound like to voice of gloom and I expect that somebody will step into voids created by these trends. But that’s liable to mean smaller ISPs will end up relying on foreign vendors that will not come with the same kinds of prices, reliability or service the industry is used to today.

The Future of WiFi

There are big changes coming over the next few years with WiFi. At the beginning of 2017 a study by Parks Associates showed that 71% of broadband homes now use WiFi to distribute the signal – a percentage that continues to grow. New home routers now use the 802.11ac standard, although there are still plenty of homes running the older 802.11n technology.

But there is still a lot of dissatisfaction with WiFi, and many of my clients tell me that most of the complaints they get about broadband connections are due to WiFi issues. These ISPs deliver fast broadband to the home only to see WiFi degrading the customer experience. But there are big changes coming with the next generation of WiFi that ought to improve the performance of home WiFi networks. The next generation of devices will use the 802.11ax standard, and we ought to start seeing them on the market by early 2019.

There are several significant changes in the 802.11ax standard that will improve the customer WiFi experience. First is the use of a wider spectrum channel at 160 MHz, considerably wider than the channels commonly used by 802.11ac. A bigger channel means that data can be delivered faster, which will solve many of the deficiencies of current WiFi home networks. This improves network performance through the brute-strength approach of pushing more data through a connection faster.

But probably more significant is the use in 802.11ax of 4x4 MIMO (multiple input / multiple output) antennas. These new antennas will be combined with orthogonal frequency division multiple access (OFDMA). Together these new technologies will provide for multiple, separate data streams within a WiFi network. In layman's terms, think of the new technology as operating four separate WiFi networks simultaneously. By distributing the network load across separate channels, the interference on any given channel will decrease.
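
A rough peak-rate calculation shows how channel width, modulation, and MIMO streams multiply together. The subcarrier counts, guard interval, and coding rate below are the published PHY parameters as I understand them, so treat the exact figures as approximate.

def wifi_phy_rate_mbps(data_subcarriers, bits_per_symbol, coding_rate,
                       spatial_streams, symbol_duration_us):
    """Peak PHY rate = data tones x bits per tone x coding rate x streams,
    divided by the OFDM symbol time (bits per microsecond equals Mbps)."""
    bits = data_subcarriers * bits_per_symbol * coding_rate * spatial_streams
    return bits / symbol_duration_us

# 802.11ac-style: 80 MHz, 256-QAM (8 bits), 4 streams, 4.0 us symbols
print(round(wifi_phy_rate_mbps(234, 8, 5/6, 4, 4.0)))     # ~1560 Mbps

# 802.11ax-style: 160 MHz, 1024-QAM (10 bits), 4 streams, 13.6 us symbols
print(round(wifi_phy_rate_mbps(1960, 10, 5/6, 4, 13.6)))  # ~4804 Mbps

Real-world throughput will land far below these peak numbers, but the ratio between the two shows why the new standard matters.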

Reducing interference is important because interference is the cause of a lot of the woes of current WiFi networks. The WiFi standard allows unrestricted access to the spectrum, and every device within the range of a WiFi network has an equal opportunity to grab the network. It is this open sharing that lets us connect lots of different devices easily to a WiFi network.

But the sharing has a big downside. A WiFi network shares the spectrum by pausing when it senses more than one device trying to transmit at the same time. The network pauses for a short period and then serves the first device it notices when it resumes. In a busy WiFi environment the network stops and starts often, causing the total throughput on the network to drop significantly.

But with four separate networks running at the same time there will be far fewer stops and starts, and a user on any one channel should have a far better experience than today. Further, with the OFDMA technology the data from multiple devices can coexist better, meaning that a WiFi router can better handle more than one device at the same time, further reducing the negative impacts of competing signals. The technology lets the network smoothly mix signals from different devices to avoid network stops and starts.
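
Here is a toy slotted-contention model in Python that illustrates the stop-and-start problem and why splitting devices across parallel channels helps. It is not a model of the actual 802.11 backoff algorithm, and the 25% attempt probability is an arbitrary assumption.

def useful_slot_prob(n_devices, attempt_prob=0.25):
    """A slot carries data only when exactly one device transmits;
    two or more simultaneous attempts collide and the airtime is wasted."""
    p = attempt_prob
    return n_devices * p * (1 - p) ** (n_devices - 1)

for n in (8, 16, 32):
    shared = useful_slot_prob(n)
    split = useful_slot_prob(n // 4)   # same devices spread over 4 channels
    print(f"{n} devices: {shared:.2f} useful slots shared vs "
          f"{split:.2f} per channel when split 4 ways")

And since the four channels run in parallel, the total useful airtime is roughly four times the per-channel figure.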

The 802.11ax technology ought to greatly improve the home WiFi experience. It will have bigger channels, meaning it can send and receive data to WiFi connected devices faster. And it will use the MIMO antennas to make separate connections with devices to limit signal collision.

But 802.11ax is not the last WiFi improvement we will see. Japanese scientists have made recent breakthroughs in using what is called the terahertz range of frequencies – spectrum above 300 GHz. They've used the 500 GHz band to create a 34 Gbps WiFi connection. Until now, work at these higher frequencies has been troublesome because transmission distances have been limited to just a few centimeters.

But the scientists have created an 8-array antenna that they think can extend the practical reach of fast WiFi to as much as 30 feet – more than enough to create blazingly fast WiFi in a room. These frequencies will not pass through barriers and would require a small transmitter in each room. But the scientists believe the transmitters and receivers can be made small enough to fit on a chip – making it possible to affordably put the chips into any device, including cell phones. Don't expect multi-gigabit WiFi for a while. But it's good to know that scientists are working a generation or two ahead on technologies that we will eventually want.

Cable Labs Analysis of 5G

Cable Labs and Arris just released an interesting paper that is the best independent look at the potential for 5G that I've seen. Titled “Can a Fixed Wireless Last 100m Connection Really Compete with a Wired Connection and Will 5G Really Enable this Opportunity?”, the paper was written to inform cable companies about the potential for 5G as a direct competitor to cable network broadband. The paper was released at the recent SCTE-ISBE forum in Denver. The paper is heavily technical and is aimed at engineers who want to understand wireless performance.

As is typical with everything I've seen out of Cable Labs over the years, the paper is not biased and takes a fair look at the issues. It's basically an examination of how spectrum works in the real world. This is refreshing since the vast majority of materials available about 5G are sponsored by wireless vendors or the big wireless providers that have a vested interest in that market succeeding. I've found many of the claims about 5G to be exaggerated and overly optimistic in terms of the speeds that can be delivered and about when 5G will be commercially deployed.

The paper explores a number of different issues. It looks at wireless performance in a number of different frequency bands from 3.5 GHz through the millimeter wave spectrum. It takes a fair look at interference issues, such as how foliage from different kinds of trees affects wireless performance. It considers line-of-sight versus near line-of-sight capabilities of radios.

The conclusions from the report are nearly the same ones I have been blogging about for a while:

  • Speeds on 5G can be significant, particularly with millimeter wave radios. The radios already in use today are capable of gigabit speeds.
  • The spectrums being used suffer significant interference issues. The spectrums will be hampered when being used in wooded areas or with the trees on many residential streets.
  • Coverage is also an issue since the effective delivery distance for much of the spectrum being used is relatively short. This means that transmitters need to be relatively close to customers.
  • Backhaul is a problem. Fast speeds require fiber connectivity to transmitters or else robust wireless backhaul – which suffers from the same coverage and interference issues as the connections to homes.

The paper also takes a look at the relative cost of deploying 5G technology at today's prices (a rough cost-per-subscriber sketch follows this list):

  • A 3.5 GHz system used for wireless drops (800-meter coverage distance) costs about $3,000 for the transmitter and $300 per home. These radios would be making home connections of perhaps 100 Mbps.
  • A millimeter wave transmitter costs about $22,500 with home receivers at about $650. This would only cover about a 200-meter distance.
  • In both cases the transmitter costs would be spread over the number of customers within the relatively short coverage area.
  • These numbers don’t include backhaul costs or the cost of somehow mounting the radios on poles in neighborhoods.
  • These numbers don’t add up to compelling case for 5G wireless as strong cable competitor, particularly considering the interference and other impediments.

The conclusion of the paper is that 5G will be most successful for now in niche applications. It is likely to be used most heavily in serving multi-tenant buildings in densely populated urban areas. It can be justified as a temporary solution for a broadband customer until a carrier can bring them fiber. And of course, we already know that point-to-multipoint wireless has a big application in rural areas where there are no broadband alternatives – but that application is not 5G.

But for now, Cable Labs is telling its cable company owners that there doesn’t seem to be a viable business case for 5G as a solution for widespread deployment to residential homes in cities and suburbs where the cable companies operate.

5G Networks and Neighborhoods

With all of the talk about the coming 5G technology revolution I thought it might be worth taking a little time to talk about what a 5G network means for the aesthetics of neighborhoods. Just what might a street getting 5G see in new construction that is not there today?

I live in Asheville, NC and our town is hilly and has a lot of trees. Trees are a major fixture in lots of towns in America, and people plant shade trees along streets and in yards even in states where there are not many trees outside of towns.

5G is being touted as a fiber replacement, capable of delivering speeds up to a gigabit to homes and businesses. This kind of 5G (which is different from 5G cellular) is going to use the millimeter wave spectrum bands. There are a few characteristics of that spectrum that define how a 5G network must be deployed. This spectrum has extremely short wavelengths, and that means two things. First, the signal isn't going to travel very far before it dissipates and grows too weak to deliver fast data. Second, these short wavelengths don't penetrate anything. They won't go through leaves, walls, or even a person walking past the transmitter – so these frequencies require a true unimpeded line-of-sight connection.
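
A quick way to see why millimeter wave behaves so differently from traditional cellular spectrum is simply to compute the wavelengths. This is basic physics, though the blocking behavior of real obstacles is more complicated than a single number suggests.

def wavelength_cm(freq_ghz):
    """Wavelength in centimeters: speed of light divided by frequency."""
    return 3e10 / (freq_ghz * 1e9)

for band, f in [("700 MHz cellular", 0.7), ("3.5 GHz", 3.5),
                ("28 GHz mmWave", 28), ("39 GHz mmWave", 39), ("60 GHz", 60)]:
    print(f"{band:>16}: {wavelength_cm(f):5.1f} cm")

Lower-band signals, with wavelengths measured in tens of centimeters, bend around and pass through obstructions far better than signals whose wavelength is shorter than a leaf.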

These requirements are going to be problematic on the typical residential street. Go outside your own house and see if there is a perfect line-of-sight from any one pole to your home as well as to three or four of your neighbors. The required unimpeded path means there can be no tree, shrub or other impediment between the transmitter on a pole and each home getting this service. This may not be an issue in places with few trees like Phoenix, but it sure doesn’t look very feasible on my street. On my street the only way to make this work would be by imposing a severe tree trimming regime – something that I know most people in Asheville would resist. I would never buy this service if it meant butchering my old ornamental crepe myrtle. And tree trimming must then be maintained into the future to keep new growth from blocking signal paths.

Even where this can work, this is going to mean putting up some kind of small dish at each customer location in a place that has line-of-sight to the pole transmitter. This dish can't go just anywhere on a house in the way that satellite TV dishes can often be put in places that aren't very noticeable. While these dishes will be small, they must go where the transmitter can always see them. That's going to create all sorts of problems if this is not the place where the existing wiring enters the home. In my home the wiring comes into the basement in the back of the house while the best line-of-sight options are in the front – and that is going to mean some costly new wiring by an ISP, which might negate the cost advantage of 5G.

The next consideration is back-haul – how to get the broadband signals into and out of the neighborhood. Ideally this would be done with fiber. But I can’t see somebody spending the money to string fiber in a town like Asheville, or in most residential neighborhoods just to support wireless. The high cost of stringing fiber is the primary impediment today for getting a newer network into cities.

One of the primary alternatives to stringing fiber is to feed neighborhood 5G nodes with point-to-point microwave radio shots. In a neighborhood like mine these won't be any more practical than the 5G signal paths. The solution I see being used for this kind of back-haul is to erect tall poles of 100 to 120 feet to provide a signal path over the tops of trees. I don't think many neighborhoods are going to want to see a network of tall poles built around them. And tall poles still suffer the same line-of-sight issues. They still have to somehow beam the signal down to the 5G transmitters – and that means a lot more tree trimming.

All of this sounds dreadful enough, but to top it off the network I've described would be needed for a single wireless provider. If more than one company wants to provide wireless broadband then the number of devices multiplies accordingly. The whole promise of 5G is that it will allow for multiple new competitors, and that implies a town filled with multiple wireless devices on poles.

And with all of these physical deployment issues there is still the cost issue. I haven't seen any numbers for the cost of the needed neighborhood transmitters that make a compelling business case for 5G.

I’m the first one to say that I’ll never declare that something can’t work because over time engineers might find solutions for some of these issues. But where the technology sits today this technology is not going to work on the typical residential street that is full of shade trees and relatively short poles. And that means that much of the talk about gigabit 5G is hype – nobody is going to be building a 5G network in my neighborhood, for the same sorts of reasons they aren’t building fiber here.

New Technology – October 2017

I’ve run across some amazing new technologies that hopefully will make it to market someday.

Molecular Data Storage. A team of scientists at the University of Manchester recently made a breakthrough with a technology that allows high volumes of data to be stored within individual molecules. They've shown the ability to create high-density storage that could hold 25,000 gigabits of data on something the size of a quarter.

They achieved the breakthrough using molecules that contain the element dysprosium (that's going to send you back to the periodic table) cooled to a temperature of -213 degrees centigrade. At that temperature the molecules retain magnetic alignment. Previously this has required cooling molecules to -259 C. The group's goal is to find a way to do this at -196 C, the temperature of affordable liquid nitrogen, which would make this a viable commercial technology.
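
For reference, converting those temperatures to the Kelvin scale shows why the liquid nitrogen threshold matters: liquid nitrogen boils at about 77 K, while the far colder temperatures previously required put the technique into expensive liquid-helium territory.

def to_kelvin(celsius):
    return celsius + 273.15

for label, c in [("previous requirement", -259),
                 ("Manchester result", -213),
                 ("liquid nitrogen target", -196)]:
    print(f"{label:>22}: {to_kelvin(c):.0f} K")

Moving the working temperature from 14 K up to 60 K gets most of the way to the 77 K goal.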

The most promising use of this kind of dense storage would be in large data centers, since this storage is 100 times more dense than existing technologies. This would make data centers far more energy efficient while also speeding up computing. That kind of improvement matters since there are predictions that within 25 years data centers will be the largest users of electricity on the planet.

Bloodstream Electricity. Researchers at Fudan University in China have developed a way to generate electricity from a small device immersed in the bloodstream. The device uses stationary nanoscale carbon fibers that act like a tiny hydropower generator. They've named the device the 'fiber-shaped fluidic nanogenerator' (FFNG).

Obviously there will need to be a lot of testing to make sure that the devices don't cause problems like blood clots. But the devices hold great promise. A person could use these devices to charge a cellphone or wearable device. They could be used to power pacemakers and other medical devices. They could power implanted chips used to monitor and track farm animals, or be used to monitor wildlife.

Light Data Storage. Today's theme seems to be small, and researchers at Caltech have developed a small computer chip that is capable of temporarily storing data using individual photons. This is the first team that has been able to reliably capture photons in a readable state on a tiny device. This is an important step in developing quantum computers. Traditional computers store data as either a 1 or a 0, but quantum computers can also store data that is both a 1 and a 0 simultaneously. This has been shown to be possible with photons.

Quantum computing devices need to be small and operate at the nanoscale because they hold data only fleetingly until it can be processed, and nanochips allow rapid processing. The Caltech device is tiny – around the size of a red blood cell. The team was able to store a photon for 75 nanoseconds, and the ultimate goal is to store information for a full millisecond.

Photon Data Transmission. Researchers at the University of Ottawa have developed a technology to transmit a secure message using photons that are carrying more than one bit of information. This is a necessary step in developing data transmission using light, which would free the world from the many limitations of radio waves and spectrum.

Radio wave data transmission technologies send one bit of data at a time with each passing wavelength. Being able to send more than one bit of data with an individual photon creates the possibility of sending massive amounts of data through the open atmosphere. Scientists have achieved the ability to encode multiple bits onto a photon in the lab, but this is the first time it's been done through the atmosphere in a real-world application.
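
The arithmetic behind 'more than one bit per photon' is simple: a photon prepared in one of d distinguishable states carries log2(d) bits, which is also why the 4D encoding mentioned below works out to two bits per photon.

import math

def bits_per_photon(dimensions):
    """A photon encoded across d distinguishable states carries log2(d) bits."""
    return math.log2(dimensions)

for d in (2, 4, 8, 16):
    print(f"{d}-dimensional encoding: {bits_per_photon(d):.0f} bits per photon")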

The scientists are now working on a trial between two locations that are almost three miles apart and that will use a technology they call adaptive optics that can compensate for atmospheric turbulence.

There are numerous potential uses for the technology in our industry. It could be used to create ultrahigh-speed connections between a satellite and earth. It could be used to transmit data without fiber between locations with a clear line-of-sight. It could be used as a secure method of communications with airplanes since small light beams can't be intercepted or hacked.

The other use of the technology is to leverage the ability of photons to carry more than one bit of data to create a new kind of encryption that should be nearly impossible to break. The photon data transmission allows for the use of 4D quantum encryption to carry the keys needed to encrypt and decrypt packets, meaning that every data packet could use a different encryption scheme.

Cable Systems Aren’t All Alike

Big cable companies all over the country are upgrading their networks to DOCSIS 3.1 and announcing that they will soon have gigabit broadband available. Some networks have already been upgraded and we are seeing gigabit products and pricing springing up in various markets around the country. But this does not mean that all cable networks are going to be capable of gigabit speeds, or even that all cable networks are going to be upgraded to DOCSIS 3.1. As the headline of this blog says, all cable systems aren't alike. Today's blog looks at what that means as it applies to available broadband bandwidth.

A DOCSIS cable network is effectively a radio network that operates only inside the coaxial cable. This is why you will hear cable network capacity described using megahertz, which is a measure of the frequency of a radio transmission. Historically cable networks came in various frequency sizes such as 350 MHz, 650 MHz or 1,000 MHz.

The size of the available frequency, in megahertz, describes the capacity of the network to carry cable TV channels or broadband. Historically one analog TV channel uses about 6 MHz of frequency – meaning that a 1,000 MHz system can transmit roughly 167 channels of traditional analog TV.

Obviously cable networks carry more channels than this, which is why you've seen cable companies upgrade to digital systems. The most commonly used digital compression scheme can squeeze six digital channels into the same frequency that carries one analog channel. There are newer compression techniques that can squeeze even more digital channels into one slot.

In a cable network each slice of available frequency can be used either to transmit TV channels or to carry broadband. If a cable company wants more broadband capacity it must create room for the broadband by reducing the number of slots used for TV.

It is the overall capacity of the cable network along with the number of 'empty' channel slots that determines how much broadband the network can deliver to customers. A cable system needs roughly 24 empty channel slots to offer gigabit broadband download speeds. It's a lot harder to carve out enough empty channels on a smaller-capacity network. An older cable system operating at 650 MHz has significantly less capacity for broadband than a newer urban system operating at 1,000 MHz or greater.
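
The arithmetic behind the 24-channel rule of thumb is straightforward. The roughly 38 Mbps per 6 MHz slot used below is the usual throughput of a 256-QAM DOCSIS 3.0 channel; DOCSIS 3.1 OFDM squeezes somewhat more out of the same spectrum, so treat these as ballpark numbers.

def bonded_capacity_mbps(empty_channel_slots, mbps_per_6mhz_slot=38):
    """Rough downstream capacity from bonding 6 MHz channel slots."""
    return empty_channel_slots * mbps_per_6mhz_slot

print(round(1000 / 6), "channel slots in a 1,000 MHz system")   # ~167
print(bonded_capacity_mbps(24), "Mbps from 24 empty slots")     # ~912 Mbps

That 900+ Mbps is what lets a cable company market a gigabit product – and it shows why a 650 MHz system, with far fewer slots to spare after carrying the TV lineup, struggles to get there.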

One of the primary benefits of DOCSIS 3.1 is the ability to combine any number of empty channels into a single broadband stream. But the task of upgrading many older networks to DOCSIS 3.1 is not just a simple issue of upgrading the electronics. If a cable company wants the faster broadband speeds it needs to also upgrade the overall capacity of the network. And the upgrade from 350 MHz or 650 MHz to 1,000 MHz is often expensive.

The higher-capacity network has different operating characteristics that affect the outside cable plant. For example, the placement and spacing of cable repeaters and power taps is different in a higher-frequency network. In some cases the coaxial cable used in an older cable network can't handle the higher frequencies and must be replaced. So upgrading an older cable network to get faster speeds often means making a lot of changes in the physical cable plant. To add to the cost, this kind of upgrade also usually means having to change out most or all of the cable settop boxes and cable modems – an expensive undertaking when every customer has multiple devices.

The bottom line of all of this is that it's not necessarily cheap or easy to upgrade older or lower-capacity cable networks to provide faster broadband. It takes a lot more than upgrading the electronics to get faster speeds, and often means upgrading the physical cable plant and replacing settop boxes and cable modems. Cable operators with older networks have to do a cost/benefit analysis to see if it's worth the upgrade cost to get faster broadband. Since most older cable systems are in rural small towns, this is one more hurdle that must be overcome to provide faster broadband in rural America.

CAF II and Wireless

Frontier Communications just announced that they are testing the use of wireless spectrum to complete the most rural portions of their CAF II build-out requirement. The company accepted $283 million per year for six years ($1.7 billion total) to upgrade broadband to 650,000 rural homes and businesses. That’s a little over $2,600 per location passed. The CAF II program requires that fund recipients increase broadband to speeds of at least 10 Mbps down and 1 Mbps up.
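
The per-location figure is simple arithmetic on the award amounts quoted above:

annual_support = 283_000_000   # dollars per year of CAF II funding
years = 6
locations = 650_000

total = annual_support * years
print(f"${total / 1e9:.2f} billion total")               # ~$1.70 billion
print(f"${total / locations:,.0f} per location passed")  # ~$2,612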

Frontier will be using point-to-multipoint radios where a transmitter is mounted on a tower with the broadband signal then sent to a small antenna at each customer's location. Frontier hasn't said what spectrum they are using, but in today's environment it's probably a mix of 2.4 GHz and 5 GHz WiFi spectrum and perhaps also some 3.65 GHz licensed spectrum. Frontier, along with CenturyLink and Consolidated, told the FCC a year ago that they would be interested in using the spectrum in the 'citizens' radio band' between 3.7 GHz and 4.2 GHz for this purpose. The FCC opened a docket looking into this spectrum in August and comments in that docket were due to the FCC last week.

I have mixed feelings about using federal dollars to launch this technology. On the plus side, if this is done right this technology can be used to deliver bandwidth up to 100 Mbps, but in a full deployment speeds can be engineered to deliver consistent 25 Mbps download speeds. But those kinds of speeds require an open line-of-sight to customers, tall towers that are relatively close to customers (within 3 – 4 miles) and towers that are fiber fed.

But when done poorly the technology delivers much slower broadband. There are WISPs using the technology to deliver speeds that don’t come close to the FCC’s 10/1 Mbps requirement. They often can’t get fiber to their towers and they will often serve customers that are much further than the ideal distance from a tower. Luckily there are many other WISPs using the technology to deliver great rural broadband.

The line-of-sight issue is a big one and this technology is a lot harder to make work in places with lots of trees and hills, making it a difficult delivery platform in Appalachia and much of the Rockies. But the technology is being used effectively in the plains and open desert parts of the country today.

I see downsides to funding this technology with federal dollars. The primary concern is that the technology is not long-lived. The electronics are not generally expected to last more than seven years and then the radios must be replaced. Frontier is using federal dollars to get this installed, and I am sure that the $2,600 per passing is enough to completely fund the deployment. But are they going to keep pouring capital into replacing radios regularly over time? If not, these deployments would be a sick joke to play on rural homes – giving them broadband for a few years until the technology degrades. It’s hard to think of a worse use of federal funds.

Plus, in many of the areas where the technology is useful there are already WISPs deploying point-to-multipoint radios. It seems unfair to use federal dollars to compete against firms who have made private investments to build the identical technology. The CAF money ought to be used to provide something better.

I understand Frontier’s dilemma. In the areas where they took CAF II money they are required to serve everybody who doesn’t have broadband today. My back-of-the envelope calculations tells me that the CAF money was not enough for them to extend DSL into the most rural parts of the CAF areas since extending DSL means building fiber to feed the DSLAMs.

As I have written many times, I find the whole CAF program to be largely a huge waste of federal dollars. Using up to $10 billion to expand DSL, point-to-multipoint, and in the case of AT&T cellular wireless is a poor use of our money. That same amount of money could have seeded matching grants that would be building a lot of fiber to these same customers. We only have to look at state initiatives like the DEED grants in Minnesota to see that government grant money induces significant private investment in fiber. And as much as the FCC doesn't want to acknowledge it, building anything less than fiber is nothing more than a Band-aid. We can and should do better.