Gluing Fiber

A lot of people asked for my opinion about a recent news article that talks about a new technology for gluing fiber directly to streets. The technology comes from a start-up, Traxyl, and involves adhering fiber to streets or other hard surfaces using a hard resin coating. The company says that in early trials the coating has withstood weather, snowplows and a 50-ton excavator. The company predicts that the coating ought to be good for ten years.

Until I see this over time in real life I don’t have any way to bless or diss the technology. But I have a long list of concerns that would have to be overcome before I’d recommend using it. I’d love to hear other pros and cons from readers.

Surface Cuts. No matter how tough the coating, the fiber will be broken whenever there is a surface cut in the street. Shallow surface cuts happen a lot more often than cuts that reach deeper fiber, and even microtrenched fiber at 6 inches is a lot safer. As an example, on my residential street in the last year there have been two water main breaks and one gas leak that caused the city to cut across the whole road surface from curb to curb. I wouldn’t be shocked if a city of this size (90,000 population) sees a dozen such road cuts somewhere every day. This makes me wonder about the true useful life of the fiber, because that’s a lot of outages to deal with.

I also worry about smaller road disturbances. Anything that breaks the road surface is going to break the fiber. That could be road heaving, tree roots or potholes. I’d hate to lose my fiber every time a pothole formed under it.

Repaving. Modern roads undergo a natural cycle. After initial paving, roads are generally repaved every 10-15 years by laying down a new coat of material on top of the existing road. During the preparation for repaving it’s not unusual to lightly groom the road and perhaps scrape off a little bit of the surface. It seems like this process would first cut the fiber in multiple places and would then bury the fiber under a few inches of fresh macadam. I would think the fiber would have to be replaced since there would be no access to it after repaving.

The repaving process is generally done 2 to 4 times during the life of a street before there’s a need for a full rebuild. In a rebuild the roadbed is excavated down to the substrate and any needed repairs are made to the substrate before an entirely new road surface is laid. This process would fully remove the glued fiber from the street (as it would also remove micro-trenched fiber).

Outage time frames. The vendor says that a cut can be mended by either ungluing and fixing the existing wire or else placing a new fiber patch over the cut. That sounds like something that can be done relatively quickly. My concern comes from the nature of road cuts. It’s not unusual for a road cut to be in place for several days when there is a major underground problem with gas or water utilities. That means fiber cuts might go days before they can be repaired. Worse, the process of grading and repaving a road might take the fiber out of service for weeks or longer. Customers on streets undergoing repaving might lose broadband for a long time.

Cost. The vendor recognizes many of these issues. One of their suggestions to mitigate the problems would be to lay a fiber on both sides of a street. I see two problems with that strategy. First, it doubles the cost. They estimate a cost of $15,000 per mile, and this becomes less attractive at $30,000 per mile. Further, two fibers don’t fix the problem of repaving. They don’t even solve road cuts; they only halve the number of households knocked out by any given cut (each fiber serves one side of the street).

I’m also concerned about lifecycle cost. Buried conduit ought to be good for a century or more, and the fiber in those conduits might need to be replaced every 50 – 60 years. Because of street repaving the gluing technology might require new fiber 5 – 7 times in a century, making this technology significantly more expensive in the long run. Adding in the cost of constantly dealing with fiber cuts (and the customer dissatisfaction that comes with outages), this doesn’t sound customer friendly or cost effective.
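
To make that comparison concrete, here’s a minimal lifecycle sketch in Python. Only the $15,000 per mile and the 5 – 7 replacements per century come from the discussion above; the buried-conduit and re-pull costs are placeholder assumptions meant to show the shape of the math, not actual construction quotes.

```python
# Rough per-mile cost over a 100-year horizon. Only the glued-fiber install
# cost and replacement count come from the article; the buried-conduit and
# re-pull figures are illustrative assumptions, not real construction quotes.

GLUED_INSTALL = 15_000        # per mile (vendor estimate cited above)
GLUED_REPLACEMENTS = 6        # midpoint of the 5-7 replacements per century

BURIED_CONDUIT = 60_000       # per mile, assumed one-time conduit build
FIBER_REPULL = 10_000         # per mile, assumed re-pull through existing conduit
REPULLS_PER_CENTURY = 1       # fiber swapped roughly every 50-60 years

glued_century = GLUED_INSTALL * GLUED_REPLACEMENTS
buried_century = BURIED_CONDUIT + FIBER_REPULL * REPULLS_PER_CENTURY

print(f"Glued fiber:  ${glued_century:,} per mile per century (before cut repairs)")
print(f"Buried fiber: ${buried_century:,} per mile per century")
```

With those placeholder numbers the glued approach costs more over a century even before counting cut repairs; plugging in local construction costs would show where the crossover actually falls.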

The article suggests dealing with the fiber cuts by using some sort of a mesh network that I guess would create multiple local rings. This sounds interesting, but there are no fiber electronics that work that way today. If fiber is laid on both sides of the street, then a cut in one fiber knocks out the people on that side of the street. I can’t envision a PON network that could be done any other way.

These are all concerns that would worry me as a network operator. We bury fiber 3-4 feet underground to avoid all of the issues that come with fiber at the surface. To be fair, I can think of numerous applications where this could be beneficial. This might be a great way to lay fiber inside buildings. It might make sense to connect buildings in a campus environment. It would alleviate the issues of bringing fiber through federal park land where it’s hard to get permission to dig. It could be used to bring a second path to a customer that demands redundancy. It might even be a good way to get fiber to the upper floors of high-rises where the existing fiber ducts are full. But I have a hard time seeing this as a last mile residential network. I could be proven wrong, but for now I’m skeptical.

A Deeper Look at 5G

The consulting firm Bain & Company recently looked at the market potential for 5G. They concluded that there is an immediate business case to be made for 5G deployment. They go on to conclude that 5G ‘pessimists’ are wrong. I count myself as a 5G pessimist, but I admit that I look at 5G mostly from the perspective of the ability of 5G to bring better broadband to small towns and rural America. I agree with most of what Bain says, but I take the same facts and am still skeptical.

Bain says that the most immediate use for 5G deployment is in urban areas. They cite an interesting statistic I’ve never seen before: it will cost $15,000 – $20,000 to upgrade an existing cell site with 5G, but between $65,000 and $100,000 to deploy a new 5G node. Until the cost for new 5G cell sites comes way down it’s going to be hard for anybody to justify deploying new 5G cell sites except in places with enough potential business to support the high investment cost.

Bain recommends that carriers deploy 5G quickly in those places where it’s affordable in order to be first to market with the new technology. Bain also recommends that cellular carriers take advantage of improved mobile performance, but also look hard at the fixed 5G opportunities to deliver last mile broadband. They say that an operator that maximizes both opportunities should be able to see a fast payback.

A 5G network deployed on existing cell towers is going to create small circles of prospective residential broadband customers – and those circles aren’t going to be very big. Delivering significant broadband would mean coverage reaching only 1,000 to 1,500 feet from a transmitter. Cell towers today are spaced much farther apart than that, which means a 5G delivery map consisting of scattered small circles.
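
A quick back-of-the-envelope calculation shows just how small those circles are. The 1,000 to 1,500 foot reach comes from the paragraph above; the one-mile tower spacing in the comment is my own rough assumption for a typical macro cell grid.

```python
import math

# How much area does a 5G coverage circle really cover?
# The 1,000-1,500 foot reach comes from the text; the one-mile tower
# spacing below is an illustrative assumption, not an industry figure.

SQ_FT_PER_SQ_MILE = 5280 ** 2

for radius_ft in (1_000, 1_500):
    circle_sq_mi = math.pi * radius_ft ** 2 / SQ_FT_PER_SQ_MILE
    print(f"{radius_ft:>5} ft radius -> {circle_sq_mi:.2f} square miles covered")

# If towers sit roughly a mile apart, each one "owns" about a square mile,
# so even a 1,500-foot circle reaches only about a quarter of that footprint.
```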

There are not many carriers willing to tackle that business plan. It means selectively marketing only to those households within range of a 5G cell site. AT&T is the only major ISP that already uses this business plan. AT&T currently offers fiber to any homes or businesses close to their numerous fiber nodes. They could use that same sales plan to sell fixed broadband to customers close to each 5G cell site. However, AT&T has said that, at least for now, they don’t see a business case for 5G similar to their fiber roll-out.

Verizon could do this, but they have been walking away from a lot of their residential broadband opportunities, going so far as to sell a lot of their FiOS fiber customers to Frontier. Verizon says they will deploy 5G in several cities starting next year but has never talked about the number of potential households they might cover. This would require a major product roll-out for T-Mobile or Sprint, but in the document they filed with the FCC to support their merger they said they would tackle this market. Neither company currently has the fleet of needed technicians or the backoffice ready to support the fixed residential broadband market.

The report skims past the question of the availability of 5G technology. Like any new technology, the first few generations of field equipment are going to have problems. Most players in the industry have learned the lesson of not widely deploying any new technology until it’s well-proven in the field. Verizon says their early field trials have gone well, and we’ll have to wait until next year to see how much 5G they are ready to deploy with first generation technology.

Bain also says there should be no ‘surge’ in capital expenditures if companies deploy 5G wisely – but the reverse is also true, and bringing 5G small cells to places without current fiber is going to be capital intensive. I agree with Bain that, technology concerns aside, the only place where 5G makes sense for the next few years is urban areas, and mostly on existing cell sites.

I remain a pessimist about 5G being feasible in more rural areas. The cost of the electronics will need to drop to a fraction of today’s cost. There are always going to be pole issues for deploying small cells in rural America – even should regulators streamline the hanging of small cell sites, those 5G devices can’t be placed onto the short poles we often see in rural America. While small circles of broadband delivery might support an urban business model, the low density in rural America might never make economic sense.

I certainly could be wrong, but I don’t see any companies sinking huge amounts of money into 5G deployments until the technology has been field-proven and until the cost of the technology drops and stabilizes. I hope I am proven wrong and that somebody eventually finds a version of the technology that will benefit rural America – but I’m not going to believe it until I can kick the tires.

Regulating Over-the-Top Video

I know several cable head-end owners that are developing over-the-top video products to deliver over traditional cable networks. I define that to be a video product that is streamed to customers over a broadband connection and not delivered to customers through a settop box or equivalent. The industry now has plenty of examples of OTT services such as Netflix, Amazon Prime, Sling TV, Hulu and a hundred others.

While the FCC has walked almost totally away from broadband regulation there are still a lot of regulations affecting cable TV, so today I am looking at the ramifications of streaming programming to customers instead of delivering the signal in a more traditional way. Why would a company choose to stream content? The most obvious benefit is the elimination of settop boxes. OTT services only require an app on the receiving device, which can be a smart TV, desktop, laptop, tablet or cellphone. Customers largely dislike settop boxes and seem to love the ability to receive content on any device in their home. A provider that pairs OTT video delivery with a cloud DVR has replaced all of the functions of the settop box.

There are a few cable companies already doing this. Comcast today offers a streaming service they label as Xfinity Instant TV. The service starts with a base package of ten channels including local broadcast networks. They then offer 3 add-on options: a kids and family package for $10, an entertainment package for $15 and a sports and news package for $35. Comcast also touts that a customer can stream the content at any of the millions of Comcast WiFi hotspots, not only at home.

It’s an interesting tactic for Comcast to undertake, because they have invested huge R&D dollars into developing their own X1 settop box, which is the best in the industry. The company is clearly using this product to satisfy a specific market segment – likely those considering cutting the cord or those that want to be able to easily stream to any device.

A second big benefit to Comcast is that they save a lot of money on programming by offering smaller channel line-ups. Traditional cable packages generally include a lot of channels that customers don’t watch but which still must be paid for. Comcast would much prefer to sell a customer a smaller channel line-up than to have them walk away from all Comcast programming.

The third reason why a cable provider might want to stream content is that it lets them argue that they can selectively walk away from cable regulations. The only real difference between Comcast’s OTT and their traditional cable products is the technology used to get a channel to a customer. From a regulatory perspective this looks a lot like the regulatory discussions we had for years about VoIP – does changing the technology somehow create a different product with different regulations? Before VoIP there were numerous technology changes in the way calls were delivered – open wire, party-lines, digitized voice on T-carrier, etc. – but none of those technology upgrades ever changed the way that voice was regulated.

I can’t see any reason why Comcast is allowed, from a regulatory perspective, to stream their OTT content over their cable network. The company is clearly violating the rules that require the creation of specific tiers such as basic, expanded basic and premium. What seems to be happening is that regulators are deciding not to regulate. You might recall that three or four years ago the FCC opened an investigation into this and other video issues – for example, they wanted to explore whether video delivered on the web needs to be regulated. That docket also asked about IP video being delivered over a cable system. The FCC never acted on that docket, and I chalk that up to the explosion of online video content. The public voted with their pocketbooks to support streaming video and the FCC let the topic die. There are arguments that can be made for regulating streaming video, particularly when it’s delivered over the same physical network as traditional cable TV, as is the case with Comcast.

Clearly the FCC is not going to address the issue, and so the same technology and lack of regulation ought to be available to many other cable providers. But that doesn’t mean that the controversy will be over. I predict that the next battleground will be the taxation of streaming video. Comcast would gain a competitive advantage over competitors if they don’t have to pay franchise fees on streaming content. In fact, a cable company can argue they don’t need a franchise at all if they choose to stream all of their content.

It’s somewhat ironic that we are likely to have these regulatory fights over the cable product – a product that is clearly dying. Customers are demanding alternatives to traditional cable TV, yet the FCC is still saddled with the cable regulations handed to them by Congress. One nightmare scenario for Comcast and the industry would be if some competitor sues a cable company to stop the streaming product – because that would require the regulators, and ultimately the courts, to address the issue. It’s not inconceivable that a court could decide that the Comcast streaming service is in violation of the FCC rules that define channel line-ups. Congress could fix this issue easily, but unless they do away with the current laws there will always be a background regulatory threat hanging over anybody that elects to use the product.

360-degree Virtual Reality

South Korea already has the fastest overall broadband speeds in the world and is working towards the next generation of broadband. SK Broadband provides gigabit-capable fiber to over 40% of households and has plans to spend over $900 million to increase that to 80% of households by 2020.

The company also just kicked off a trial of next generation fiber technology to improve bandwidth delivery to customers. Customers today have an option of 1 Gbps service. SK Broadband just launched a trial with Nokia in an apartment building to increase end-user bandwidth. They are doing this by combining the current GPON technology with both XGS-PON and NG-PON2 to increase the overall bandwidth to the apartment complex from 2.5 Gbps to 52.5 Gbps. This configuration allows a customer to run three bandwidth-heavy devices simultaneously, with each having access to a separate 833 Mbps symmetrical data path. This particular combination of technologies may never be widely implemented since the company is also considering upgrades to bring 10 Gbps residential service.
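
For what it’s worth, the numbers in the trial line up with standard PON line rates. The sketch below is my reading of how the 52.5 Gbps and 833 Mbps figures add up, not SK Broadband’s published breakdown.

```python
# Standard PON line rates: GPON ~2.5 Gbps downstream, XGS-PON 10 Gbps,
# NG-PON2 four 10 Gbps wavelengths. This is my reconstruction of the math,
# not SK Broadband's published breakdown.

gpon, xgs_pon, ng_pon2 = 2.5, 10.0, 4 * 10.0   # Gbps on the shared fiber
total = gpon + xgs_pon + ng_pon2
print(f"Combined capacity: {total} Gbps")        # 52.5 Gbps

# The 833 Mbps per-device path looks like one 2.5 Gbps wavelength split three ways.
print(f"Per-device path: {2.5 / 3 * 1000:.0f} Mbps")   # ~833 Mbps
```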

The big question is why SK Broadband thinks customers need this much bandwidth. One reason is gaming – over 25 million people, a little over half the population of the country, partake in online gaming today. No other country is even a close second to the gaming craze there. The country has also embraced 4K video, and a large percentage of programming there uses the format, which can require data streams as large as 15 Mbps each.

But those applications alone don’t justify the kind of bandwidth the company is considering. The architect of the SK Broadband network cites the ability to deliver 360-degree virtual reality as the reason for the increase in bandwidth. With today’s compression techniques this could require data streams as much as 6 times larger than a 4K video stream, or 90 Mbps.

What is 360-degree virtual reality and how does it differ from regular virtual reality? First, the 360-degree refers to the ability to view the virtual reality landscape in any direction. That means the user can look behind, above and below them in any direction. A lot of virtual reality already has this capability. The content is shot or created to allow viewing in any direction and the VR user can look around them. For example, a 360-degree virtual reality view of a skindiver would allow a user to follow an underwater object as the diver approaches, and look back to watch it as they pass by.

But the technology that SK Broadband sees coming is 360-degree immersive VR. With normal virtual reality a user can look at anything within sight range at a given time, but the viewer moves with the skindiver – it’s strictly a viewing experience to see whatever is being offered. Immersive virtual reality lets a user define the experience – in an immersive situation the VR user can interact with the environment. They might decide to stay at a given place longer, or pick up a seashell to examine it.

SK Broadband believes that 360-degree VR will soon be a reality and they think it will be in big demand. The technology trial with Nokia is intended to support this by allowing up to three VR users at the same location to enter a virtual reality world together yet each have their own experience. Immersive VR will allow real gaming. It will let a user enter a 3D virtual world and interact in any manner they wish – much like the games played today on game consoles.

This is a great example of how broadband applications are developed to fit the capacity of networks. South Korea is the most logical place to develop high-bandwidth applications since it has so many customers using gigabit connections. Once a critical mass of potential users is in place, developers can create big-bandwidth content. It’s a lot harder for that to happen in the US since the percentage of those with gigabit connections is still tiny. However, an application developer in South Korea can get quick traction since there is a big pool of potential users.

 

The Latest on Agricultural IoT

For years we’ve heard about how broadband was needed to support agriculture. However, for many years it was hard to find widespread use of farming technologies that need broadband. Finally, agricultural use of the Internet of Things is spreading rapidly – the research firm Alpha Brown estimates that there were over 250,000 US farmers using IoT technology at the end of 2017.

Alpha Brown says there are 1.1 million farms that could benefit from the technology, with broadband connectivity being a huge limiting factor today. Surveys show that more than half of farmers already say they are interested in deploying the technology. Berg Insight, another firm that tracks the industry, says that there is the potential for as many as 27.4 million sensors deployed by US agriculture by 2021.

Agricultural sensors mostly rely on the 802.15.4 standard, which defines the operation of low-rate wireless personal area networks (LR-WPANs). Any given sensor doesn’t generate a huge amount of data, but the deployment of multitudes of sensors can require significant bandwidth.

Following are just a few of the agricultural IoT applications already being deployed.

Cattle Farmers and Ranchers. This is the most widespread use of IoT so far. There are numerous IoT applications being used:

  • Moocall is a device that monitors the delivery of calves. It’s a wireless sensor that is strapped to a pregnant cow and that provides an hour’s notice when a cow is ready to give birth.
  • eCow makes a bolus (IoT ‘pill’) that sits in a cow’s stomach for up to five months and which transmits constant readings for temperature and pH.
  • There are several vendors making sensors specific to dairy cows that measure a wide range of biometric data including temperature, animal activity, GPS position, pulse and various biological metrics. Dairy farming has become scientific, with farmers treating each cow individually to maximize milk output.

Crop Farming. There are numerous sensors now available for specific crops:

  • Bosch makes a sensor specific to asparagus farming. Asparagus yields depend upon the ground temperature, and farmers use a two-sided foil (black on one side, white on the other) to add or deflect heat from the soil. The sensor measures temperature at various depths and notifies the farmer when it’s time to flip the foil.
  • Semios makes sensors specific to fruit orchards that measure leaf wetness, soil moisture, pest presence and frost conditions, and can be tailored to each specific orchard.
  • TracoVino makes similar sensors designed specifically to monitor grape vines.
  • There are numerous vendors making IoT sensors that measure characteristics of the soil, plants and environment to signal the need for irrigation.
  • There are several vendors providing systems to automate and monitor greenhouses.

Drones. Drones are being used for a number of different agricultural tasks:

  • DroneSeed provides drones for forestry management. The drones can identify trees with pest problems and then selectively spray only those trees. The drones also collect data on forest conditions – something that was never easily available in the past. There are also several vendors using drones to plant new trees to reforest empty land and to renew mangrove swamps.
  • Water Conservation. Drones can provide real-time moisture monitoring that can allow farmers to save as much as 90% of irrigation water by only watering where needed. This requires real-time collection of data tied into watering systems.
  • Chemical use. Drones are also reducing the amount of chemicals being applied by monitoring plant health to direct fertilizer or insecticide only where needed.

 

Another New Broadband Technology

There is another new broadband technology on the horizon. It involves lighter-than-air blimps operating at about 65,000 feet. A recent filing at the FCC by the Elefante Group, in conjunction with Lockheed Martin, describes the technology, referred to as Stratospheric-Based Communications Service (SBCS).

Interestingly, the company’s filings at the FCC seem to go out of their way to avoid calling these blimps or airships – but that’s what they are. Each floating platform would have a terabyte of capacity, both upload and download. At a 65,000-foot altitude the blimps would avoid weather issues and airplanes, and one platform would be able to cover a 6,000 square mile area. That sounds like a large area, but the US has just under 3.8 million square miles. Since these are circular coverage areas, total coverage means areas will overlap, and my math says it might take nearly 1,000 blimps to cover the whole country – but there is a lot of empty space in the western states, so some lower number would practically cover most of the country.
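
Here’s the rough math behind that estimate. The 6,000 square miles per platform and the 3.8 million square miles for the US come from the paragraph above; the overlap factor for circular coverage is my own assumption.

```python
# Reproducing the 'nearly 1,000 blimps' estimate. The per-platform coverage
# and US land area come from the article; the overlap factor is my assumption
# for covering a flat area with overlapping circles.

us_area = 3_800_000          # square miles (approximate)
per_platform = 6_000         # square miles per platform

ideal = us_area / per_platform            # perfect, gap-free tiling
overlap_factor = 1.21                     # hexagonal circle covering wastes ~21%
with_overlap = ideal * overlap_factor

print(f"Perfect tiling:      {ideal:.0f} platforms")         # ~633
print(f"With circle overlap: {with_overlap:.0f} platforms")  # ~766
```

Add edge effects, terrain and some margin at the coverage boundaries and the count climbs toward 1,000.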

The company proposes a range of products including cellular backhaul, corporate WAN, residential broadband and monitoring IoT sensors. There is no mention of the possible speeds that might be offered with home broadband and I have to wonder if that’s been thrown into the filing to gain favor at the FCC. While a terabyte of broadband sounds like a lot, it could quickly get gobbled up on a given platform delivering gigabit pipes to cell towers. Top that off with creating private corporate networks and I have to wonder what might be left over for the residential market. It’s a lot easier building a business selling to cellular carriers and large businesses than building the backoffice to support the sale and support of residential broadband.

The company plans to launch the first test blimp in 2020 and a following one in 2021. By using airships and solar power the blimps can carry more electronics than the proposed low-orbit satellite networks that several companies are planning. It’s probably also far cheaper to replace the units, and one can imagine plans for bringing the blimps back to earth for periodic upgrades.

The project faces one major hurdle. They have tested various frequencies and found the sweet-spot for this particular application to be at 26 GHz. The company is asking the FCC to set aside that bandwidth for SBCS. That request will be challenged by terrestrial broadband providers. It’s a really interesting tug-of-war with higher frequencies since frequencies above 20 GHz have been used only sporadically in the past. However, now that this spectrum is being opened for 5G there are numerous technologies and companies vying to carve out chunks of millimeter wave spectrum.

The company probably has an uphill climb to grab such a significant swath of spectrum. I would think that using it in the fashion contemplated from the blimps would make the spectrum unavailable for other uses. The FCC has hard decisions to make when looking at spectrum use. For example, similar ventures – balloons from Google and large gliders from Facebook – have been quietly discontinued, and there is no guarantee that this technology or the companies behind it will prosper. A nightmare scenario for the FCC would be a weak deployment of the technology with only a few platforms deployed over major cities – effectively ruining the spectrum for other uses.

If the technology works as Elefante promises, it would be a major competitor to rural long-haul fiber providers. This technology would allow placement of small cell sites anywhere and would compete with the expensive fiber already built to provide service to rural cell sites. Technologies like this, along with the low-orbit satellites, would give me pause if I was building a new rural long-haul fiber network – but then again, they may never materialize or work as touted.

It’s certainly possible that one or more of the new promised technologies like this or like the low-orbit satellites could provide a viable broadband alternative for rural America. But just having the technologies able to serve that market doesn’t mean these companies will chase the market hard – there is a lot more money to be made in serving cell sites and creating private corporate networks. Chasing millions of rural homes is a much more complicated business and I’ll believe it when I see somebody doing it.

Will 5G Phones Need WiFi?

Our cellular networks have become heavily reliant on customers using WiFi. According to Cisco’s latest Virtual Network Index about 60% of the data generated from cellphones is carried over WiFi and landline broadband connections. Most of us have our cellphones set to grab WiFi networks that we are comfortable with, particularly in the home and office.

The move to use WiFi for data was pushed by the cellular companies. As recently as just a few years ago they were experiencing major congestion at cell sites. This congestion was due to a combination of cell sites using older versions of 4G technology and of inadequate backhaul data pipes feeding many cell sites. The cellular carriers and manufacturers made it easy to switch back and forth between cellular and WiFi and most people quickly got adept at minimizing data usage on the cellular network.

Many people have also started using WiFi calling. This is particularly valuable to those who live or work in a building with poor indoor cellular coverage, since WiFi calling allows a phone to process voice through the WiFi connection. But this has always been a sketchy technology, and WiFi calling is often susceptible to poor voice quality and unexpected dropped calls due to WiFi fluctuations. WiFi calling also doesn’t roam, so anybody walking out of range of their WiFi router automatically drops the call.

However, recently we’ve seen what is possibly the start of a trend of more broadband traffic staying on the cellular network. In a recent blog I cited evidence that unlimited cellular customers are using less WiFi and are instead staying on their cellular data network even when WiFi is available. Since most people use WiFi to preserve usage on their cellular data plans, as more people feel comfortable about not hitting a data cap we ought to see many people sticking more to cellular.

5G ought to make it even easier to keep traffic on the cellular network. The new standard will make it easier to make and hold a connection to a cell site due to a big increase in the number of simultaneous connections available at each cell site. This should finally eliminate the problem of not being able to make a cellular connection in crowded locations.

The 5G improvements are also going to increase the available bandwidth to cellphones through the use of multiple antennas and frequencies. The expectations are that cellphone download speeds will creep up with each incremental improvement in the coming 5G networks and that speeds will slowly improve over the next decade.

Unfortunately this improved performance might not make that big of a difference within buildings with poor cellular coverage today, because for the most part the frequencies used for 5G cellular will be the same ones used today. We keep reading about the coming use of millimeter waves, but the characteristics of those frequencies, such as the short distances covered, are best suited to urban areas, and it’s likely to be a long while until we see these frequencies being used everywhere in the cellular networks. Even where used, those higher frequencies will have an even harder time penetrating buildings than today’s lower frequencies.

Overall, the improvements of 5G mean that cellular customers ought to be able to stay on cellular networks more easily and not need WiFi to the same extent as today. A transition to less use of WiFi will be accelerated if the cellular marketing plans continue to push unlimited or large data-cap plans.

This all has big implications for network planning. Today’s cellular networks would be instantly swamped if people stopped using WiFi. The use of cellular data is also growing at a much faster pace than the use of landline data. Those two factors together portend blazingly fast growth in the backhaul needed for cell sites. We are likely to see geometric rates of growth, making it expensive and difficult for the cellular carriers to keep up with data demand. It’s sounding to me like being a cellular network planner might be one of the hardest jobs in the industry right now.

Predicting Broadband Usage on Networks

One of the hardest jobs these days is being a network engineer who is trying to design networks to accommodate future broadband usage. We’ve known for years that the amount of data used by households has been doubling every three years – but predicting broadband usage is never that simple.

Consider the recent news from OpenSource, a company that monitors usage on wireless networks. They report a significant shift in WiFi usage by cellular customers. Over the last year AT&T and Verizon have introduced ‘unlimited’ cellular plans and T-Mobile has pushed their own unlimited plans harder in response. While the AT&T and Verizon plans are not really unlimited and have caps a little larger than 20 GB per month, the introduction of the plans has changed the mindset of numerous users who no longer automatically seek WiFi networks.

In the last year the percentage of WiFi usage on the Verizon network fell from 54% to 51%, on AT&T from 52% to 49%, and on T-Mobile from 42% to 41%. Those might not sound like major shifts, but for Verizon it means the cellular network saw roughly 6% more growth in data volume in one year than the company would otherwise have expected. For a network engineer trying to make sure that all parts of the network are robust enough to handle the traffic this is a huge change, and it means that chokepoints in the network will appear a lot sooner than expected. In this case the change to unlimited plans is something that was cooked up by marketing folks, and it’s unlikely that the network engineers knew about it any sooner than anybody else.
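
For anyone wondering where that 6% comes from, here’s the simple reconstruction I used: hold total handset traffic constant and shift only the WiFi share reported for Verizon.

```python
# Holding total handset traffic constant, a drop in WiFi share from 54% to 51%
# means the cellular network carries 49% instead of 46% of the traffic.

wifi_before, wifi_after = 0.54, 0.51
cellular_before = 1 - wifi_before          # 46% of handset traffic
cellular_after = 1 - wifi_after            # 49% of handset traffic

extra_growth = cellular_after / cellular_before - 1
print(f"Extra cellular traffic from the WiFi shift alone: {extra_growth:.1%}")  # ~6.5%
```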

I’ve seen the same thing happen with fiber networks. I have a client who built one of the first fiber-to-the-home networks and used BPON, the first generation of electronics. The network was delivering broadband speeds of between 25 Mbps and 60 Mbps, with most customers in the range of 40 Mbps.

Last year the company started upgrading nodes to the newer GPON technology, which upped the potential customer speeds on the network to 1 gigabit. The company introduced both a 100 Mbps product and a gigabit product, but very few customers immediately upgraded. The upgrade meant changing the electronics at the customer location, but also involved a big boost in the size of the data pipes between neighborhood nodes and the hub.

The company was shocked to see data usage in the nodes immediately spike upward by between 25% and 40%. After all, they had not arbitrarily increased customer speeds across-the-board, but had just changed the technology in the background. For the most part customers had no idea they had been upgraded – so the spike can’t be attributed to a change in customer behavior like what happened to the cellular companies after introducing unlimited data plans.

However, I suspect that much of the increased usage still came from changed customer behavior. While customers were not notified that the network had been upgraded, I’m sure that many customers noticed the change. The biggest trend we see in household broadband demand over the last two years is the desire by households to run multiple big data streams at the same time. Before the upgrades, households were likely restricting their usage by not allowing kids to game or do other big-bandwidth activities while the household was video streaming or doing work. After the upgrade they probably found they no longer had to self-monitor and restrict usage.

In addition to this likely change in customer behavior the spikes in traffic also were likely due to correcting bottlenecks in the older fiber network that the company had never recognized or understood. I know that there is a general impression in the industry that fiber networks don’t see the same kind of bottlenecks that we expect in cable networks. In the case of this network, a speed test on any given customer generally showed a connection to the hub at the speeds that customers were purchasing – and so the network engineers assumed that everything was okay. There were a few complaints from customers that their speeds bogged down in the evenings, but such calls were sporadic and not widespread.

The company decided to make the upgrade because the old electronics were no longer supported by the vendor and they also wanted to offer faster speeds to increase revenues. They were shocked to find that the old network had been choking customer usage. This change really shook the engineers at the company and they feared that the broadband growth curve was going to now be at the faster rate. Luckily, within a few months each node settled back down to the historic growth rates. However, the company found itself instantly with network usage they hadn’t expected for at least another year, making them that much closer to the next upgrade.

It’s hard for a local network owner to predict the changes that are going to affect network utilization. For example, they can’t predict that Netflix will start pushing 4K video. They can’t know that the local schools will start giving homework that involves watching a lot of videos at home. Even though we all understand the overall growth curve for broadband usage, it doesn’t grow in a straight line, and there are periods of faster and slower growth along the curve. It’s enough to cause network engineers to go gray a little sooner than expected!

Future Technology – May 2018

I’ve seen a lot of articles recently that promise big improvements in computer speeds, power consumption, data storage, etc.

Smaller Transistors. There has been an assumption that we are at the end of Moore’s Law due to reaching the limit on the smallness of transistors. The smallest commercially available transistors today are 10 nanometers in diameter. The smallest theoretical size for silicon transistors is around 7 nm since below that size the transistor can’t contain the electron flow due to a phenomenon called quantum tunneling.

However, scientists at the Department of Energy’s Lawrence Berkeley Laboratory have developed a 1 nanometer transistor gate, an order of magnitude smaller than today’s silicon transistors. The scientists used molybdenum disulfide, a lubricant commonly used in auto shops. Combining this material with carbon nanotubes allows electrons to be controlled at the 1 nm distance. Much work is still needed to go from lab to production, but this is the biggest breakthrough in transistor size in many years and, if it works, will provide a few more turns of Moore’s Law.

Better Data Storage. A team of scientists at the National University of Singapore have developed a technology that could be a leap forward in data storage. The breakthrough uses skyrmions, which were first identified in 2009. The scientists have combined cobalt and palladium into a film that is capable of housing the otherwise unstable skyrmions at room temperature.

Once stabilized, the skyrmions, at only a few nanometers in size, can be used to store data. If these films can be stacked they would provide data storage with 100 times the density of current storage media. We need better storage since the amount of data we want to store is immense and expected to increase 10-fold over the next decade.

Energy Efficient Computers.  Ralph Merkle, Robert Freitas and others have created a theoretical design for a molecular computer that would be 100 billion times more energy efficient than today’s most energy efficient computers. This is done by creating a mechanical computer with small physical gates at the molecular level that mechanically open and close to create circuits. This structure would allow the creation of the basic components for computing such as AND, NAND, NOR, NOT, OR, XNOR and XOR gates without electronic components.

Today’s computers create heat due to the electrical resistance in components like transistors, and it’s this resistance that requires huge electricity bills to operate and then cool big data centers. Mechanical computers create far less heat from the process of opening and closing logic gates, and the friction can be nearly eliminated by creating tiny gates at the molecular level.

More Powerful Supercomputers. Scientists at Rice University and the University of Illinois at Urbana-Champaign have developed a process that significantly lowers power requirements while making supercomputers more efficient. The process uses a mathematical technique developed in the 1600s by Isaac Newton and Joseph Raphson that cuts down on the number of calculations done by a computer. Computers normally calculate every mathematical formula to the seventh or eighth decimal place, but using the Newton-Raphson tool can reduce the calculations to only the third or fourth decimal place while also increasing the accuracy of the results by three orders of magnitude (1,000 times).
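
For readers unfamiliar with the technique, here’s a minimal generic illustration of a Newton-Raphson iteration, computing a square root. This is the textbook method only, not the specific reduced-precision scheme developed by the Rice and Illinois researchers.

```python
# Generic Newton-Raphson iteration solving x**2 - a = 0. Each step roughly
# doubles the number of correct digits, which is why stopping at a loose
# tolerance still only takes a handful of iterations.

def newton_sqrt(a, x0=1.0, tol=1e-4, max_iter=20):
    x = x0
    for i in range(max_iter):
        x_next = x - (x * x - a) / (2 * x)   # x - f(x)/f'(x)
        if abs(x_next - x) < tol:            # loose, 'third or fourth decimal' stop
            return x_next, i + 1
        x = x_next
    return x, max_iter

root, steps = newton_sqrt(2.0)
print(f"sqrt(2) ~ {root:.6f} after {steps} iterations")
```

Because convergence is quadratic, relaxing the stopping tolerance saves iterations without giving up much accuracy, which is the spirit of the power savings described above.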

This method drastically reduces the amount of time needed to process data, which makes the supercomputer faster while drastically reducing the amount of energy needed to perform a given calculation. This has huge implications when running complex simulations such as weather forecasting programs that require the crunching of huge amounts of data. Such programs can be run much more quickly while producing significantly more accurate results.

Who’s Pursuing Residential 5G?

I’ve seen article after article over the last year talking about how 5G is going to bring gigabit speeds to residents and give them an alternative to the cable companies. But most of the folks writing these articles are confusing the different technologies and business cases that are all being characterized as 5G.

For example, Verizon has announced plans to aggressively pursue 5G for commercial applications starting later this year. The technology they are talking about is a point-to-point wireless link, reminiscent of the radios that have been in common use since MCI deployed microwave radios to disrupt Ma Bell’s monopoly. The new 5G radios use higher frequencies in the millimeter range and are promising to deliver a few gigabits of speed over distances of a mile or so.

The technology will require a base transmitter and enough height to have a clear-line-of-sight to the customer, likely sited on cell towers or tall buildings. The links are only between the transmitter and one customer. Verizon can use the technology to bring gigabit broadband to buildings not served with fiber today or to provide a second redundant broadband feed to buildings with fiber.

The press has often confused this point-to-point technology with the technology that will be used to bring gigabit broadband to residential neighborhoods. That requires a different technology that is best described as wireless local loops. The neighborhood application is going to require pole-mounted transmitters that will be able to serve homes within perhaps 1,000 feet – meaning a few homes from each transmitter. In order to deliver gigabit speeds the pole-mounted transmitters must be fiber fed, meaning that realistically fiber must be strung up each street that is going to get the technology.

Verizon says it is investigating wireless local loops and hopes eventually to use the technology to target 30 million homes. The key word there is eventually, since this technology is still in the early stages of field trials.

AT&T has said that it is not pursuing wireless local loops. On a recent call with investors, CFO John Stephens said that AT&T could not see a business case for the technology. He called the business case for wireless local loops tricky and said that in order to be profitable a company would have to have a good grasp on who was going to buy service from each transmitter. He says that AT&T is going to stick to its current network plans, which involve edging out from existing fiber, and that serving customers on fiber provides the highest quality product.

That acknowledgement is the first I’ve heard from one of the big telcos about the challenges of operating a widespread wireless network. We know from experience that fiber-to-the-home is an incredibly stable technology. Once installed it generally needs only minor maintenance and requires far less maintenance labor than competing technologies. We also know from many years of experience that wireless technologies require a lot more tinkering. Wireless technology is a lot more temperamental, and it might take a decade or more of continuous tweaking until wireless local loops become as stable as FTTH. Whoever deploys the first big wireless local loop networks better have a fleet of technicians ready to keep them working well.

The last of the big telcos is CenturyLink, and their new CEO Jeff Storey has made it clear that the company is going to focus on high-margin enterprise business opportunities and will stop deploying slow-payback technologies like residential broadband. I think we’ve seen the end of CenturyLink investing in any last-mile residential technologies.

So who will be deploying 5G wireless local loops? We know it won’t be AT&T or CenturyLink. We know Verizon is considering it but has made no commitment. It won’t be done by the cable companies which have upgraded to DOCSIS 3.1. There are no other candidates that are willing or able to spend the billions needed to deploy the new technology.

Every new technology needs to be adopted by at least one large ISP to become successful. Vendors won’t do the needed R&D or crank up the production process until they have a customer willing to place a large order for electronics. We’ve seen promising wireless technologies like LMDS and MMDS die in the past because no large ISP embraced the technologies and ordered enough gear to push the technology into the mainstream.

I look at the industry today and I just don’t see any clear success path for 5G wireless local loop electronics. The big challenge faced by wireless local loops is to become less expensive than fiber-to-the-home. Until the electronics go through the few rounds of improvements that only come after field deployment, the technology is likely to require more technician time than FTTH. It’s hard to foresee anybody taking the chance on this in any grand way.

Verizon could make the leap of faith and sink big money into an untried technology, but that’s risky. We’re more likely to keep seeing press releases talking about field trials and the potential for the 5G technology. But unless Verizon or some other big ISP commits to sinking billions of dollars into the gear it’s likely that 5G local loop technology will fizzle as has happened to other wireless technologies in the past.