Why Fiber?

As much as I’ve written about broadband and broadband technology, it struck me that I have never written a concise response to the question, “Why Fiber?” Somebody asked me recently, and I realized I had never answered it directly. If you’re going to build broadband and have a choice of technologies, why is fiber the best choice?

Future-proofed. This is a word that gets tossed around the broadband industry all of the time, to the point that most people don’t stop to think about what it means. The demand for broadband has been growing at a blistering pace. At the end of the third quarter of 2020, the average US home used 384 gigabytes of data per month. That’s up from 218 gigabytes per household per month just two years earlier. That is a mind-bogglingly large amount of data, and most people have a hard time grasping the implications of fast growth over long periods of time. Even fiber network engineers often underestimate future demand because the growth feels unrealistic.

As a useful exercise, I invite readers to plot out that growth at a 21% pace per year – the rate that broadband has been growing since the early 1980s. The amount of bandwidth that we’re likely to use ten, twenty, and fifty years from now will dwarf today’s usage.
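A quick sketch of that exercise, in Python, using the 384 GB figure and an assumed steady 21% annual growth rate (the projection is illustrative, not a forecast):

```python
# Project average monthly household broadband usage at a steady 21% annual
# growth rate, starting from 384 GB/month (the Q3 2020 figure cited above).
BASE_GB_PER_MONTH = 384
ANNUAL_GROWTH = 0.21

for years in (10, 20, 50):
    projected_gb = BASE_GB_PER_MONTH * (1 + ANNUAL_GROWTH) ** years
    print(f"In {years} years: ~{projected_gb:,.0f} GB per month "
          f"(~{projected_gb / 1024:,.1f} TB)")
```

Even the ten-year projection lands above 2.5 terabytes per household per month, which is why “future-proof” has to mean decades of headroom, not just a comfortable margin over today’s usage.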

Fiber is the only technology that can handle the broadband demand today and for the next fifty years. You can already buy next-generation PON equipment that can deliver a symmetrical 10 Gbps data stream to a home or business. The next generation already under beta test will deliver a symmetrical 40 Gbps. The next generation after that is likely to be 80 Gbps or 100 Gbps. The only close competitor to fiber is a cable company coaxial network, and the only way to future-proof those networks would be to ditch the bandwidth used for TV, which is the majority of the bandwidth on a cable network. Even if cable companies are willing to ditch TV, the copper coaxial networks are already approaching the end of their economic life. While there has been talk of gigabit wireless to residences (which I’ll believe when I see it), nobody has ever talked about 10-gigabit wireless.

Fiber Has Solved the Upload Problem. Anybody working or schooling from home now needs fast and reliable upload broadband. Fiber is the only technology that solves the upload needs today. Wireless can be set to have faster uploads, but doing so sacrifices download speed. Cable networks will only be able to offer symmetrical broadband with an expensive upgrade using technology that won’t be available for at least three years. The industry consensus is that cable companies will be loath to upgrade unless forced to by competition.

Easiest to Operate. Fiber networks are the easiest to operate since they transmit light instead of radio waves. Cable company and telco copper networks act like giant antennas that pick up interference. Interference from other wireless providers or from natural phenomena is the predominant challenge of wireless technologies.

A fiber network means fewer trouble calls, fewer truck rolls, and lower labor costs. It’s far faster to troubleshoot problems in fiber networks. Fiber cables are also surprisingly strong, and fiber is often the only wire still functioning after a hurricane or ice storm.

Lower Life Cycle Costs. Fiber is clearly expensive to build, but the cost characteristics over a fifty-year time frame can make fiber the lowest-cost long-term option. Nobody knows how long fiber will last, but fiber manufactured today is far superior to fiber manufactured a few decades ago. When fiber is installed carefully and treated well, it might well last for most of a century. Fiber electronics are likely to have to be upgraded every 10-12 years, but manufacturers are attuned to technology upgrades that allow older customer devices to remain in service even after an upgrade. When considering replacement costs and ongoing maintenance expenses, fiber might be the lowest-cost technology over long time frames.
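A toy model makes the lifecycle argument concrete. All of the dollar figures below are invented purely for illustration; the point is the structure of the comparison, not the values.

```python
# Purely illustrative lifecycle comparison (all figures are made up, not from
# the article): a higher up-front fiber build versus a cheaper network that
# needs full replacement sooner, evaluated over a 50-year window.
def lifecycle_cost(build_cost, asset_life_years, annual_maintenance, horizon=50):
    rebuilds = -(-horizon // asset_life_years)   # ceiling division: builds needed
    return rebuilds * build_cost + horizon * annual_maintenance

fiber  = lifecycle_cost(build_cost=1000, asset_life_years=50, annual_maintenance=10)
copper = lifecycle_cost(build_cost=600,  asset_life_years=25, annual_maintenance=25)
print(f"Fiber 50-year cost per passing:  ${fiber:,}")
print(f"Copper 50-year cost per passing: ${copper:,}")
```

With these hypothetical inputs, the cheaper-to-build network ends up costing more over fifty years because it has to be rebuilt and maintained more often, which is the shape of the argument above.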

Powering the Future

For years there have been predictions that the world would be filled with small sensors that would revolutionize the way we live. Five years ago, there were numerous predictions that we’d be living in a cloud of sensors. The limitation on realizing that vision has been figuring out how to power the sensors and other electronics. Traditional batteries are too expensive and have a limited life. As you might expect, scientists from around the world have been working on better power technologies.

Self-Charging Batteries. The California company NDB has developed a self-charging battery that could remain viable for up to 28,000 years. Each battery contains a small piece of recycled radioactive carbon-14 that comes from recycled nuclear fuel rods. As the isotope decays, the battery uses a heat sink of lab-created carbon-12 diamond which captures the energetic particles of decay while acting as a tough physical barrier to contain the radiation.

The battery consists of multiple layers of radioactive material and diamond and can be fashioned into any standard battery size, like a AAA. The overall radiation level of the battery is low – less than the natural radiation emitted by the human body. Each battery is effectively a small power generator in the shape of a traditional battery that never needs to be recharged. One of the most promising aspects of the technology is that nuclear power plants pay NDB to take the radioactive material.

Printed Flexible Batteries. Scientists at the University of California San Diego have been researching batteries that use silver-oxide zinc chemistry. They’ve been able to create a flexible device that offers 10 times the energy density of lithium-ion batteries. The flexible material means that batteries can be shaped to fit devices instead of devices being designed to fit batteries.

Silver–zinc batteries have been around for many years, and the breakthrough is that the scientists found a way to screen print the battery material, meaning a battery can be placed onto almost any surface. The printing process works in a vacuum and layers on the current collectors, zinc anode, cathode, and separator to create a polymer film that is stable up to almost 400 degrees Fahrenheit. The net result is a battery with ten times the power output of a lithium-ion battery of the same size.

Anti-Lasers. Science teams from around the world have been working to create anti-lasers. A laser operates by beaming photons, while an anti-laser sucks up photons from the environment. An anti-laser could be used in a laptop or cellphone to collect photons and use them to power the battery in the device.

The scientific name for the method being used is coherent perfect absorption (CPA). In practice, this requires one device that beams out a photon light beam and devices with CPA technology to absorb the beams. In the laboratory, scientists have been able to capture as much as 99.996% of the transmitted power, making this more energy-efficient than plugging a device into electric power. There are numerous possible uses for the technology, starting with the obvious ability to charge devices that aren’t plugged into electricity. But the CPA devices have other possible uses. For example, the devices are extremely sensitive to changes in photons in a room and could act as highly accurate motion sensors.

Battery-Free Sensors. In the most creative solution I’ve read about, MIT scientists started a new firm, Everactive, and have developed sensors that don’t require a battery or external power source. The key to the Everactive technology is the use of ultra-low-power integrated circuits that are able to harvest energy from low light, background vibrations, or small temperature differentials.

Everactive is already deploying sensors in applications where it’s hard to change sensors, such as inside steam-generating equipment. The company also makes sensors that monitor rotating machinery and that are powered by the vibrations coming from the machinery. Everactive says its technology has a much lower lifetime cost than traditionally powered sensors when considering the equipment downtime and cost required to periodically replace batteries.

Building Rural Coaxial Networks

Charter won $1.22 billion in the RDOF grant auction and promised on its short-form application to build gigabit broadband. Charter won grant areas in 24 states, including being the largest winner in my state of North Carolina. I’ve had several people ask me if it’s possible to build rural coaxial networks, and the answer is yes, but with some caveats.

Charter and other cable companies use hybrid fiber-coaxial (HFC) technology to deliver service to customers. This technology builds fiber to neighborhood nodes and then delivers services from the nodes using coaxial copper cables. HFC networks follow a standard called DOCSIS (Data Over Cable Service Interface Specification) that was created by CableLabs. Charter currently uses the latest standard, DOCSIS 3.1, which easily allows for the delivery of gigabit download speeds, but something far slower for upload.

There are several distance limitations of an HFC network that come into play when deploying the technology in rural areas. First, there is a limitation of roughly 30 miles between the network core and a neighborhood node. The network core in an HFC system is called a CMTS (cable modem termination system). In urban markets, a cable company will usually have only one core, and there are not many urban markets where 30 miles is a limiting factor. But 30 miles becomes a limitation if Charter wants to serve the new rural areas from an existing CMTS hub, which would normally be located in a larger town or county seat. In glancing through the rural locations that Charter won, I see places that are likely going to force Charter to establish a new rural hub and CMTS. There is new technology available that allows a small CMTS to be moved into the field, so perhaps Charter is looking at this technology. It’s not a technology that I’ve seen used in the US, and the leading manufacturers of small CMTS technology are the Chinese electronics companies that are banned from selling in the US. If Charter is going to reach rural neighborhoods, in many cases it will have to deploy a rural CMTS in some manner.

The more important distance limitation is in the last mile of the coaxial network. Transmissions over an HFC network can travel about 2.5 miles without needing an amplifier. 2.5 miles isn’t very far, and amplifiers are routinely deployed to boost the signals in urban HFC networks. Engineers tell me that the maximum number of amplifiers that can be deployed is 5, and beyond that number, the broadband signal strength quickly dies. This limitation means that the longest run of coaxial cable to reach homes is about 12.5 miles. That’s 12.5 miles of cable, not 12.5 miles as the crow flies.
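A back-of-the-envelope sketch of that amplifier budget, using the figures cited above:

```python
# Rough reach arithmetic for a rural HFC node, using the numbers in the text:
# ~2.5 miles of coax per amplified span and a practical limit of 5 amplifiers.
MILES_PER_SPAN = 2.5
MAX_AMPLIFIERS = 5

max_cable_run = MILES_PER_SPAN * MAX_AMPLIFIERS
print(f"Longest coax run from a node: ~{max_cable_run} cable miles")

# These are cable-route miles, not straight-line distance: winding rural roads
# mean the area actually reachable from one node is much smaller than a
# 12.5-mile-radius circle would suggest.
```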

To stay within the 12.5-mile limit, Charter will have to deploy a lot of fiber and create rural nodes that might serve only a few homes. This was the same dilemma faced by the big telcos when they were supposed to upgrade DSL with CAF II money – the telcos needed to build fiber deep into rural areas to make it work. The telcos punted on the idea, and we now know that a lot of the CAF II upgrades were never made.

Charter faces another interesting dilemma in building an HFC network. The price of copper has steadily grown over the last few decades, and copper now costs four times more than in 2000. This means that the cost of buying coaxial cable is relatively expensive (a phenomenon that anybody building a new house discovers when they hear the price of new electrical wiring). It might make sense in a rural area to build more fiber to reduce the miles of coaxial cable.

Building rural HFC makes for an interesting design challenge. There were a number of rural cable systems built sixty years ago at the start of the cable industry, because those were the areas, in places like Appalachia, that had no over-the-air TV reception. But those early networks carried only a few channels of TV, meaning that the distance limitations were a lot less critical. There have been few rural cable networks built in more recent times. Most cable companies have a metric where they won’t build coaxial cable plant anywhere with fewer than 20 homes per road mile. The RDOF grant areas are far below that metric, and one has to suppose that Charter thinks the grants make the math work.

To answer the original question – it is possible to build rural coaxial networks that can deliver gigabit download speeds. But it’s also possible to take some shortcuts and overextend the amplifier budget and curtail the amount of bandwidth that can be delivered. I guess we’ll have to wait a few years to see what Charter and others will do with the RDOF funding.

 

Explaining Open RAN

If you read more than an article or two about 5G and cellular technology, you’re likely to run across the term Open RAN. You’ll get a sense that this is a good thing, but unless you understand cellular networks, the term probably means little else. Open RAN is a movement within the cellular industry to design cellular networks using generic equipment modules so that networks can be divorced from proprietary technologies and controlled by software. This is akin to what has happened in big data centers, where software now controls generic servers.

The first step in creating Open RAN has been to break the network down into specific functions to allow for the development of generic hardware. Today’s cellular networks have two major components – the core network and the radio access network (RAN). The easiest analogy for the core network is a tandem switching center. Cellular carriers have regional hubs where a set of electronics and switching processes the traffic from large numbers of cell sites. The RAN is all of the cell sites where the cellular company maintains a tower and radios to communicate with customers.

Open RAN has broken the cell network into three generic modules. The radio unit (RU) is located near or is incorporated into the antenna and contains the electronics that transmit and receive signals from customers. The distributed unit (DU) is the brains at the cell site. The centralized unit (CU) is a more generic set of core hardware that communicates between the core and the distributed units.

The next step in developing Open RAN has been to ‘open’ the protocols and interfaces between the components of the cellular network. The industry has created the O-RAN Alliance, which has developed open-source software that controls all aspects of the cellular network. The software has been developed in eleven generic modules that handle the major functions of the cellular network. For example, there is a software module for controlling the front-haul function between the radio unit and the distributed unit, a module for the mid-haul function between the distributed unit and the centralized unit, etc.
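As a rough conceptual sketch of that decomposition (the class and method names below are mine, purely illustrative, not O-RAN Alliance definitions or APIs):

```python
# Conceptual sketch of the Open RAN decomposition described above.
# Names are illustrative only, not real O-RAN Alliance interfaces.
from dataclasses import dataclass

@dataclass
class RadioUnit:            # RU: at or near the antenna
    site: str
    def transmit(self, samples: bytes) -> None: ...
    def receive(self) -> bytes: ...

@dataclass
class DistributedUnit:      # DU: the "brains" at the cell site
    site: str
    def fronthaul(self, ru: RadioUnit) -> None:
        """Open front-haul interface between the RU and the DU."""

@dataclass
class CentralizedUnit:      # CU: generic hardware between the core and the DUs
    region: str
    def midhaul(self, du: DistributedUnit) -> None:
        """Open mid-haul interface between the DU and the CU."""

# Because the interfaces are open and standardized, any vendor's RU can, in
# principle, be paired with any vendor's DU and CU, all orchestrated by software.
```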

While the industry has created generic open-source software, each large carrier will create its own flavor of the software to configure features the way it wants them. Today it’s hard to tell the difference between using AT&T versus T-Mobile, but that is likely to change over time as each carrier develops its own flavor of features.

There are some huge benefits to an Open RAN network. The first is savings on hardware. It’s far less expensive to buy generic radios rather than proprietary radio systems from one of the major vendors. In data centers, we’ve seen the cost of generic switches and servers drop hardware costs by as much as 80%.

But the biggest benefit of Open RAN is the ability to control cell sites with a single software system. Today, updating cell sites to add a new feature is a mind-boggling task if the upgrade requires any hardware – a technician has to visit every cell site in a nationwide network. Even software upgrades are a challenge and often have to be done on site since there are numerous configurations of cell sites in a network. With Open RAN, features would be fully software-driven and could all be updated at the same time.

The cellular carriers love the concept because Open RAN frees them to develop unique solutions for customers that are software-driven and not limited by proprietary hardware and software. The industry has always talked about developing specialized features for industries like agriculture or hospitals, and Open RAN provides the platform to finally do that. Even better, each major hospital chain could have the unique features it desires. This leads to an exciting future where customers can help design their own features rather than choosing from a menu of industry features.

Interestingly, the Open RAN concept will also carry over into cellphones, where the best cellphones will have generic chips that can be updated to develop new features without having to upgrade phones every few years.

Converting to Open RAN won’t be cheap or easy because it will ultimately mean scrapping most of the electronics and software being used today at every cell site. We’re likely to first see the big carriers phase in Open RAN by segments, such as using the solution for small cell sites before converting the big tower sites.

One cellular carrier is likely to take the lead in this movement. Dish Network is in the process of building a nationwide cellular network from scratch, and the company has fully embraced Open RAN. This will put pressure on the other carriers to catch up if Dish’s nimble network starts capturing large nationwide customers.

Technology Trends for 2021

The following are the most important current trends that will be affecting the telecom industry in 2021.

Fiber Construction Will Continue Fast and Furious in 2021. Carriers of all shapes and sizes are still building fiber. There is a bidding war going on to get the best construction crews and fiber labor rates are rising in some markets.

The Supply Chain Still Has Issues. The huge demand for building new fiber had already put stress on the supply chain at the beginning of 2020. The pandemic increased the delays as big buyers reacted by re-sourcing some of the supply chain outside of China. By the end of 2020, there was a historically long waiting time to buy fiber for new and smaller buyers because the biggest fiber builders have pre-ordered huge quantities of fiber cable. Going into 2021 the delays for electronics have lessened, but there will be issues with buying fiber for much of 2021. By the end of the year, this ought to return to normal. Any new fiber builder needs to plan ahead and order fiber early.

Next-Generation PON Prices Dropping. The prices for 10-gigabit PON technologies continue to drop, and the technology is now perhaps 15% more expensive than GPON, which supports speeds up to a symmetrical gigabit. Anybody building a new network needs to consider the next-generation technology, or at least choose equipment that will fit into a future overlay of the faster technology.

Biggest ISPs are Developing Proprietary Technology. In a trend that should worry smaller ISPs, most of the biggest ISPs are developing proprietary technology. The cable companies have always done this through CableLabs, but now companies like Comcast are striking out with their own versions of gear. Verizon is probably leading the pack and has developed proprietary fiber-to-the-curb technology using millimeter-wave spectrum as well as proprietary 5G equipment. The large ISPs collectively are pursuing open-source routers, switches, and FTTP electronics that each company will then control with proprietary versions of software. The danger in this trend for smaller ISPs is that a lot of routinely available technology may become hard to find or very expensive when the big ISPs are no longer participating in the market.

Fixed Wireless Gear Improving. The electronics used for rural fixed wireless are improving rapidly as vendors react to the multiple new bands of spectrum approved by the FCC over the last year. The best gear now seamlessly integrates multiple bands of spectrum and also meets the requirements to notify other carriers when shared spectrum bands are being used.

Big Telcos Walking Away from Copper. AT&T formally announced in October 2020 that it will no longer add new DSL customers. This is likely the first step for the company to phase out copper service altogether. The company has been claiming for years that it loses money maintaining the old technology. Verizon has been even more aggressive and has been phasing out copper service at the local telephone exchange level throughout the northeast for the last few years. DSL budgets will be slashed and DSL techs let go, and as bad as DSL is today, it’s going to go downhill fast from here.

Ban on Chinese Electronics. The US ban on Chinese electronics is now in full force. Not only are US carriers forbidden from buying new Chinese electronics, but Congress has approved funding to rip out and replace several billion dollars of currently deployed Chinese electronics. This ostensibly is being done for network security because of fears that Chinese equipment includes a backdoor that can be hacked, but this is also tied up in a variety of trade disputes between the US and China. I’m amazed that we can find $2 billion to replace electronics that likely pose no threat but can’t find money to properly fund broadband.

5G Still Not Here. In 2021 there is still no actual 5G technology being deployed. Instead, what is being marketed today as 5G is really 4G delivered over new bands of spectrum. We are still 3 to 5 years away from seeing any significant deployment of the new features that define 5G. This won’t stop the cellular carriers from crowing about the 5G revolution for another year. But maybe we’ve turned the corner and there will be fewer than the current twenty 5G ads during a single football game.

Understanding Oversubscription

It’s common to hear that oversubscription is the cause of slow broadband – but what does that mean? Oversubscription comes into play in any network when the aggregate subscribed customer demand is greater than the available bandwidth.

The easiest way to understand the concept is with an example. Consider a passive optical fiber network where up to 32 homes share the same neighborhood fiber. In the most common GPON technology, the customers on one of these neighborhood nodes (called a PON) share a total of 2.4 gigabits of download data.

If an ISP sells a 100 Mbps download connection to 20 customers on a PON, then in aggregate those customers could use as much as 2 gigabits of bandwidth, meaning there is still unsold capacity – each customer is guaranteed the full 100 Mbps connection inside the PON. However, if an ISP sells a gigabit connection to 20 customers, then there are 20 gigabits of potential customer usage pledged over the same 2.4-gigabit physical path. The ISP has sold more than 8 times the capacity that is physically available, and this particular PON has an oversubscription ratio of over 8.
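The arithmetic in that example is simple enough to sketch:

```python
# Oversubscription ratio for a GPON neighborhood node (PON), per the example above.
def oversubscription_ratio(customers, sold_mbps_each, physical_mbps):
    return (customers * sold_mbps_each) / physical_mbps

PON_CAPACITY_MBPS = 2_400   # shared GPON download capacity (~2.4 Gbps)

# 20 customers at 100 Mbps: less bandwidth sold than physically exists.
print(oversubscription_ratio(20, 100, PON_CAPACITY_MBPS))    # ~0.83

# 20 customers at 1 Gbps: roughly 8x more sold than physically available.
print(oversubscription_ratio(20, 1_000, PON_CAPACITY_MBPS))  # ~8.33
```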

When people first hear about oversubscription, they are often aghast – they think an ISP has done something shady and is selling people more bandwidth than can be delivered. But in reality, an oversubscription ratio recognizes how people use bandwidth. It’s highly likely in the example of selling gigabit connections that customers will always have access to their bandwidth.

ISPs understand how customers use bandwidth, and they can take advantage of the real behavior of customers in deciding oversubscription ratios. In this example, it’s highly unlikely that any residential customer ever uses a full gigabit of bandwidth – because there is almost no place on the web where a residential customer can connect at that speed.

But more importantly, a home subscribing to a gigabit connection rarely uses most of the bandwidth it has purchased. A home isn’t using much bandwidth when people are asleep or away from home. The residents of a gigabit home might spend the evening watching a few simultaneous videos and barely use any bandwidth. The ISP is banking on the normal behavior of its customers in determining a safe oversubscription ratio. ISPs have come to learn that households buying gigabit connections often don’t use any more bandwidth than homes buying 100 Mbps connections – they just complete web transactions faster.

Even if the bandwidth in this example PON ever gets too busy, the issue is likely temporary. For example, if a few doctors lived in this neighborhood and were downloading big MRI files at the same time, the neighborhood might temporarily cross the 2.4-gigabit available bandwidth limit. Since transactions happen quickly for a gigabit customer, such an event would not likely last very long, and even while it was occurring, most residents in the PON wouldn’t see a perceptible difference.

It is possible to badly oversubscribe a neighborhood. Anybody who uses a cable company for broadband can remember back a decade when broadband slowed to a crawl when homes started watching Netflix in the evening. The cable company networks were not designed for steady video streaming and were oversubscribing bandwidth by factors of 200 to one or higher. It became routine for the bandwidth demand for a neighborhood to significantly surpass network capacity, and the whole neighborhood experienced a slowdown. Since then, the cable companies have largely eliminated the problem by decreasing the number of households in a node.

As an aside, ISPs know they have to treat business neighborhoods differently. Businesses might engage in steady large bandwidth uses like connecting to multiple branches, using software platforms in the cloud, using cloud-based VoIP, etc. An oversubscription ratio that works in a residential neighborhood is likely to be far too high in some business neighborhoods.

To make the issue even more confusing, the sharing of bandwidth at the neighborhood level is only one place in a network where oversubscription comes into play. Any other place inside the ISP network where customer data is aggregated and combined will face the same oversubscription issue. The industry uses the term chokepoint to describe a place in a network where bandwidth can become a constraint. There is a minimum of three chokepoints in every ISP network, and there can be many more. Bandwidth can be choked in the neighborhood as described above, can be choked in the primary network routers that direct traffic, or can be choked on the path between the ISP and the Internet. If any chokepoint in an ISP network gets over-busy, then the ISP has oversubscribed the portion of the network feeding into the chokepoint.
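One way to think about chokepoints: the speed a customer actually experiences is bounded by the most congested point along the path. A minimal sketch, with chokepoint names and numbers that are purely illustrative:

```python
# A customer's effective throughput is limited by the busiest chokepoint in the
# path. Capacities and loads below are illustrative, not real network figures.
chokepoints = {
    "neighborhood PON": {"capacity_mbps": 2_400,   "current_load_mbps": 1_900},
    "core routers":     {"capacity_mbps": 100_000, "current_load_mbps": 40_000},
    "internet transit": {"capacity_mbps": 20_000,  "current_load_mbps": 19_500},
}

headroom = {name: c["capacity_mbps"] - c["current_load_mbps"]
            for name, c in chokepoints.items()}
bottleneck = min(headroom, key=headroom.get)
print(f"Tightest chokepoint right now: {bottleneck} "
      f"({headroom[bottleneck]} Mbps of headroom)")
```

In this made-up example the neighborhood PON has plenty of room, and it’s the transit connection to the Internet that is oversubscribed.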

Quantum Encryption

Verizon recently conducted a trial of quantum key distribution technology, which is the first generation of quantum encryption. Quantum cryptography is being developed as the next-generation encryption technique that should protect against hacking from quantum computers. Carriers like Verizon care about encryption because almost every transmission inside our communications paths is encrypted.

The majority of encryption today uses asymmetric encryption. That means the encryption technique relies on a pair of keys. To use an example, if you want to send encrypted instructions to your bank (such as to pay your broadband bill), your computer uses the publicly available key issued by the bank to encode the message. The bank then uses a different private key that only it has to decipher the message.

Key-based encryption is safe because it takes immense amounts of computing power to guess the details of the private key. Encryption methods today mostly fight off hacking by using long encryption keys – the latest standard is a key consisting of at least 2048 bits.
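Here is a minimal sketch of that public/private key exchange using the Python cryptography package and a 2048-bit RSA key; it illustrates the concept, not any bank’s actual implementation.

```python
# Minimal illustration of asymmetric (public/private key) encryption with a
# 2048-bit RSA key, using the Python "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The "bank" generates a key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The customer encrypts a message with the bank's public key...
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"pay my broadband bill", oaep)

# ...and only the holder of the private key can decrypt it.
assert private_key.decrypt(ciphertext, oaep) == b"pay my broadband bill"
```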

Unfortunately, current encryption methods won’t stay safe for much longer. It seems likely that quantum computers will soon have the capability of cracking today’s encryption keys. This is possible since quantum computers can perform thousands of simultaneous calculations and could cut down the time needed to crack an encryption key from months or years to hours. Once a quantum computer can do that, then no current encryption scheme is safe. The first targets for hackers with quantum computers will probably be big corporations and government agencies, but it probably won’t take long to turn the technology to hacking into bank accounts.

Today’s quantum computers are not yet capable of cracking today’s encryption keys, but computing experts say that it’s just a matter of time. This is what is prompting Verizon and other large ISPs to look for a form of encryption that can withstand hacks from quantum computers.

Quantum key distribution (QKD) uses a method of encryption that might be unhackable. Photons are sent one at a time through a fiber optic transmission to accompany an encrypted message. If anybody attempts to intercept or listen to the encrypted stream, the polarization of the photons is affected, and the recipient of the encrypted message instantly knows the transmission is no longer safe. The theory is that this will stop hackers before they know enough to crack into and analyze a data stream.

The Verizon trial added a second layer of security using a quantum random number generator. This technique generates random numbers and constantly updates the decryption keys in a way that can’t be predicted.

Verizon and others have shown that these encryption techniques can be performed over existing fiber optic lines without modifying the fiber technology. There was a worry in early trials of the technology that new types of fiber transmission gear would be needed for the process.

For now, the technology required for quantum encryption is expensive, but as the price of quantum computer chips drops, this encryption technique ought to become affordable and be available to anybody that wants to encrypt a transmission.

Network Outages Go Global

On August 30, CenturyLink experienced a major network outage that lasted for over five hours and disrupted CenturyLink customers nationwide as well as many other networks. What was unique about the outage was the scope of the disruptions, as the outage affected video streaming services, game platforms, and even webcasts of European soccer.

This is an example of how telecom network outages have expanded in size and scope and can now be global in scale. This is a development that I find disturbing because it means that our telecom networks are growing more vulnerable over time.

The story of what happened that day is fascinating and I’m including two links for those who want to peek into how the outages were viewed by outsiders who are engaged in monitoring Internet traffic flow. First is this report from a Cloudflare blog that was written on the day of the outage. Cloudflare is a company that specializes in protecting large businesses and networks from attacks and outages. The blog describes how Cloudflare dealt with the outage by rerouting traffic away from the CenturyLink network. This story alone is a great example of modern network protections that have been put into place to deal with major Internet traffic disruptions.

The second report comes from ThousandEyes, which is now owned by Cisco. The company is similar to Cloudflare and helps clients deal with security issues and network disruptions. The ThousandEyes report comes from the day after the outage and discusses the likely reasons for the outage. Again, this is an interesting story for those who don’t know much about the operations of the large fiber networks that constitute the Internet. ThousandEyes confirms the suspicions expressed the day before by Cloudflare that the issue was caused by a powerful network command issued by CenturyLink using Flowspec that resulted in a logic loop that turned off and restarted BGP (Border Gateway Protocol) over and over again.

It’s reassuring to know that there are companies like Cloudflare and ThousandEyes that can stop network outages from spilling over into other networks. But what is also clear from the reporting of the event is that a single incident or bad command can take out huge portions of the Internet.

That is something worth examining from a policy perspective. It’s easy to understand how this happens at companies like CenturyLink. The company has acquired numerous networks over the years from the old Qwest network up to the Level 3 networks and has integrated them all into a giant platform. The idea that the company owns a large global network is touted to business customers as a huge positive – but is it?

Network owners like CenturyLink have consolidated and concentrated the control of the network to a few key network hubs controlled by a relatively small staff of network engineers. ThousandEyes says that the CenturyLink Network Operation Center in Denver is one of the best in existence, and I’m sure they are right. But that network center controls a huge piece of the country’s Internet backbone.

I can’t find where CenturyLink ever gave the exact reason why the company issued a faulty Flowspec command. It may have been used to try to tamp down a problem at one customer or have been part of more routine network upgrades implemented early on a Sunday morning when the Internet is at its quietest. From a policy perspective, it doesn’t matter – what matters is that a single faulty command could take down such a large part of the Internet.

This should cause concerns for several reasons. First, if one unintentional faulty command can cause this much damage, then the network is susceptible to this being done deliberately. I’m sure that the network engineers running the Internet will say that’s not likely to happen, but they also would have expected this particular outage to have been stopped much sooner and more easily.

I think the biggest concern is that the big network owners have adopted the idea of centralization to such an extent that outages like this one are more and more likely. Centralization of big networks means that outages can now reach globally and not just locally, as was the case just a decade ago. Our desire to be as efficient as possible through centralization has increased the risk to the Internet, not decreased it.

A good analogy for understanding the risk in our Internet networks comes from looking at the nationwide electric grid. It used to be routine to purposefully allow neighboring grids to automatically interact until it became obvious, after some giant rolling blackouts, that we needed firewalls between grids. The electric industry reworked the way that grids interact, and the big rolling regional outages disappeared. It’s time to have that same discussion about the Internet infrastructure. Right now, the security of the Internet is in the hands of a few corporations that stress the bottom line first and that have willingly accepted increased risk to our Internet backbones as a price to pay for cost efficiency.

Network Function Virtualization

Comcast recently did a trial of DOCSIS 4.0 at a home in Jacksonville, Florida, and was able to combine various new techniques and technologies to achieve a symmetrical 1.25 Gbps connection. Comcast says this was achieved using DOCSIS 4.0 technology coupled with network function virtualization (NFV), and distributed access architecture (DAA). Today I’m going to talk about the NFV concept.

The simplest way to explain network function virtualization is that it brings the lessons learned in creating efficient data centers to the edge of the network. Consider a typical data center application: providing computing for a large business customer. Before the conversion to the cloud, the large business network likely contained a host of different devices such as firewalls, routers, load balancers, VPN servers, and WAN accelerators. In a fully realized cloud application, all of these devices are replaced with software that mimics the functions of each device, all operated remotely in a data center consisting of banks of super-fast computer chips.

There are big benefits from a conversion to the cloud. Each of the various devices used in the business IT environment is expensive and proprietary. The host of expensive devices, likely from different vendors, is replaced with lower-cost generic servers that run on fast chips. A host of expensive electronics sitting at each large business is replaced by much cheaper servers sitting in a data center in the cloud.
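Conceptually, each box of hardware becomes a software function that packets flow through in sequence. A toy sketch of that idea (the function names and rules are mine, purely illustrative, not any vendor’s implementation):

```python
# Toy illustration of network function virtualization: hardware appliances
# become software functions chained together on generic servers.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def firewall(pkt: Packet) -> Optional[Packet]:
    blocked = {"10.0.0.99"}                 # hypothetical block list
    return None if pkt.src in blocked else pkt

def load_balancer(pkt: Packet) -> Packet:
    servers = ["app-1", "app-2", "app-3"]   # hypothetical server pool
    pkt.dst = servers[hash(pkt.src) % len(servers)]
    return pkt

def service_chain(pkt: Packet) -> Optional[Packet]:
    for vnf in (firewall, load_balancer):   # each function replaces a physical box
        pkt = vnf(pkt)
        if pkt is None:
            return None                     # dropped by the virtual firewall
    return pkt

print(service_chain(Packet("192.168.1.10", "bank.example", b"hello")))
```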

There is also a big efficiency gain from the conversion because the existing devices in the historic network inevitably operated with different software systems that were never 100% compatible. Everything was cobbled together and made to work, but the average IT department at a large corporation never fully understood everything going on inside the network. There were always unexplained glitches when the software systems of different devices interacted.

In this trial, Comcast used this same concept in the cable TV broadband network. Network function virtualization was used to replace the various electronic devices in the traditional Comcast network, including the CMTS (cable modem termination system), various network routers, transport electronics for sending a broadband signal to neighborhood nodes, and likely the whole way down to the set-top box. All of these electronic components were virtualized and performed in the data center, or nearer to the edge in devices using the same generic chips that are used in the data center.

There are some major repercussions for the industry if the future is network function virtualization. First, all of the historic telecom vendors in the industry disappear. Comcast would operate a big data center composed of generic servers, as is done today in other data centers all over the country. Gone would be different brands of servers, transport electronics, and CMTS servers – all replaced by sophisticated software that will mimic the performance of each function performed by the former network gear. The current electronics vendors are replaced by one software vendor and cheap generic servers that can be custom built by Comcast without the need for an external vendor.

This also means a drastically reduced need for electronics technicians at Comcast, replaced by a handful of folks operating the data center. We’ve seen this same transition roll through the IT world as IT staffs have been downsized due to the conversion to the cloud. There is no longer a need for technicians that understand proprietary hardware such as Cisco servers, because those devices no longer exist in the virtualized network.

NFV should mean that a cable company becomes more nimble in that it can introduce a new feature for a set-top box or a new efficiency in data traffic routing instantly by upgrading the software system that now operates the cable network.

But there are also two downsides for a cable company. First, conversion to a cloud-based network means an expensive rip and replacement of every electronics component in the network. There is no slow migration into DOCSIS 4.0 if it means a drastic redo of the underlying way the network functions.

There is also the new danger that comes from reliance on one set of software to do everything in the network. Inevitably there are going to be software problems that arise – and a software glitch in an NFV network could mean a crash of the entire Comcast network everywhere. That may sound extreme, and companies operating in the cloud will work hard to minimize such risks – but we’ve already seen a foreshadowing of what this might look like in recent years. The big fiber providers have centralized network functions across their national fiber networks, and we’ve seen network outages in recent years that have knocked out broadband networks in half of the US. When a cloud-based network crashes, it’s likely to crash dramatically.

Breakthroughs in Laser Research

Since the fiber industry relies on laser technology, I periodically look to see the latest breakthroughs and news in the field of laser research.

Beaming Lasers Through Tubes. Luc Thévenaz and a team from the Fiber Optics Group at the École Polytechnique Fédérale de Lausanne in Switzerland have developed a technology that amplifies light through hollow-tube fiber cables.

Today’s fiber has a core of solid glass. As light moves through the glass, the light signal naturally loses intensity due to impurities in the glass, losses at splice points, and light that bounces astray. Eventually, the light signal must be amplified and renewed if the signal is to be beamed for great distances.

Thévenaz and his team reasoned that the light signal would travel further if it could pass through a medium with less resistance than glass. They created hollow fiber glass tubes with the center filled with air. They found that there was less attenuation and resistance as the light traveled through the air tube and that they could beam signals for a much greater distance before needing to amplify the signal. However, at normal air pressure, they found that it was challenging to intercept and amplify the light signal.

They finally hit on the idea of adding pressure to the air in the tube. They found that as air is compressed in the tiny tubes, the air molecules form regularly spaced clusters, and the compressed air acts to strengthen the light signal, similar to the manner in which sound waves propagate through the air. The results were astounding, and they found that they could amplify the light signal as much as 100,000 times. Best of all, this can be done at room temperature. It works for all frequencies of light from infrared to ultraviolet, and it seems to work with any gas.

The implication of the breakthrough is that light signals will be able to be sent for great distances without amplification. The challenge will be to find ways to pressurize the fiber cable (something that we used to do fifty years ago with air-filled copper cable). The original paper is available for purchase in Nature Photonics.

Bending the Laws of Refraction. Ayman Abouraddy, a professor in the College of Optics and Photonics at the University of Central Florida, along with his team, has developed a new kind of laser that doesn’t obey the understood principles of how light refracts and travels through different substances.

Light normally slows down when it travels through denser materials. This is something we all instinctively understand, and it can be seen by putting a spoon into a glass of water. To the eye, it looks like the spoon bends at that point where the water and air meet. This phenomenon is described by Snell’s Law, and if you took physics you probably recall calculating the angles of incidence and refraction predicted by the law.
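For reference, the relationship the law predicts is below, where the n values are the refractive indices of the two materials and the angles are measured from the surface normal:

```latex
% Snell's Law: refraction at the boundary between two media
% n_1, n_2 = refractive indices; \theta_1, \theta_2 = angles from the normal
n_1 \sin\theta_1 = n_2 \sin\theta_2
```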

The new lasers don’t follow Snell’s law. Light is arranged into what the researchers call spacetime wave packets. The packets can be arranged in such a way that they don’t slow down or speed up as they pass through materials of different density. That means that the light signals taking different paths can be timed to arrive at the destination at the same time.

The scientists created the light packets using a device known as a spatial light modulator, which arranges the energy of a pulse of light in a way that the normal properties of space and time are no longer separate. I’m sure that, like me, you have no idea what that means.

This creates a mind-boggling result in that light can pass through different mediums and yet act as if there is no resistance. The packets still follow another age-old rule, Fermat’s Principle, which says that light always travels along the path that takes the least time. The findings are leading scientists to look at light in a new way and to develop new concepts for the best way to transmit light beams. The scientists say it feels as if the old restrictions of physics have been lifted, giving them a host of new avenues for light and laser research.

The research was funded by the U.S. Office of Naval Research. One of the most immediate uses of the technology would be the ability to communicate simultaneously from planes or satellites with submarines in different locations. The research paper is also available from Nature Photonics.