Future Technology – May 2018

I’ve seen a lot of articles recently that promise big improvements in computer speeds, power consumption, data storage, etc.

Smaller Transistors. There has been an assumption that we are at the end of Moore’s Law because we are reaching the limit on how small transistors can be made. The smallest commercially available transistors today are built at the 10 nanometer scale. The smallest theoretical size for silicon transistors is around 7 nm, since below that size the transistor can’t contain the electron flow due to a phenomenon called quantum tunneling.

However, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory have developed a 1 nanometer transistor gate, roughly an order of magnitude smaller than today’s silicon transistors. The scientists used molybdenum disulfide, a lubricant commonly used in auto shops. Combining this material with carbon nanotubes allows electrons to be controlled across the 1 nm gate. Much work is still needed to go from lab to production, but this is the biggest breakthrough in transistor size in many years, and if it pans out it will provide a few more turns of Moore’s Law.

Better Data Storage. A team of scientists at the National University of Singapore has developed a technology that could be a leap forward in data storage. The breakthrough uses skyrmions, which were first identified in 2009. The scientists have combined cobalt and palladium into a film that is capable of housing the otherwise unstable skyrmions at room temperature.

Once stabilized, the skyrmions, at only a few nanometers in size, can be used to store data. If these films can be stacked they would provide data storage with 100 times the density of current storage media. We need better storage since the amount of data we want to store is immense and is expected to increase 10-fold over the next decade.

Energy Efficient Computers. Ralph Merkle, Robert Freitas and others have created a theoretical design for a molecular computer that would be 100 billion times more energy efficient than today’s most energy efficient computers. The design is for a mechanical computer with tiny physical gates at the molecular level that open and close to form circuits. This structure would allow the creation of the basic components for computing – AND, NAND, NOR, NOT, OR, XNOR and XOR gates – without electronic components.
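The molecular design itself is purely mechanical, but as a toy software illustration of that gate set (my own sketch, not anything from the Merkle/Freitas work), the Python below shows how every gate in the list can be composed from a single NAND primitive – which is why a machine that can only open and close simple gates is still a complete computer.

    # Toy illustration: the full gate set built from NAND alone.
    # Ordinary software, not the molecular mechanical design itself.
    def NAND(a, b): return not (a and b)

    def NOT(a):     return NAND(a, a)
    def AND(a, b):  return NOT(NAND(a, b))
    def OR(a, b):   return NAND(NOT(a), NOT(b))
    def NOR(a, b):  return NOT(OR(a, b))
    def XOR(a, b):  return AND(OR(a, b), NAND(a, b))
    def XNOR(a, b): return NOT(XOR(a, b))

    # Quick truth-table check
    for a in (False, True):
        for b in (False, True):
            print(a, b, AND(a, b), OR(a, b), XOR(a, b), XNOR(a, b))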

Today’s computers create heat due to the electrical resistance in components like transistors, and it’s this resistance that drives the huge electricity bills for operating and then cooling big data centers. A mechanical computer creates heat only from the friction of opening and closing logic gates, and that friction can be nearly eliminated by making the gates tiny, at the molecular level.

More Powerful Supercomputers. Scientists at Rice University and the University of Illinois at Urbana-Champaign have developed a process that significantly lowers power requirements while making supercomputers more efficient. The process uses a mathematical technique developed in the 1600s by Isaac Newton and Joseph Raphson that cuts down on the number of calculations done by a computer. Computers normally carry every mathematical formula out to the seventh or eighth decimal place, but the Newton-Raphson tool can reduce the calculations to only the third or fourth decimal place while also increasing the accuracy of the results by three orders of magnitude (1,000 times).
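As a minimal sketch of the Newton-Raphson idea itself (a generic example of my own, not the Rice/Illinois implementation), each iteration refines a guess, and you simply stop once the answer is good to the precision you actually need – the third or fourth decimal place rather than the seventh or eighth.

    # Generic Newton-Raphson illustration: compute sqrt(2) to a chosen precision.
    # A looser tolerance means fewer iterations, and so less computation.
    def newton_sqrt(a, tol):
        x = max(a, 1.0)                  # starting guess
        while abs(x * x - a) > tol:
            x = 0.5 * (x + a / x)        # Newton-Raphson update for f(x) = x^2 - a
        return x

    print(newton_sqrt(2.0, 1e-4))        # good to roughly 4 decimal places
    print(newton_sqrt(2.0, 1e-8))        # good to roughly 8 decimal places, more work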

This method drastically reduces the amount of time needed to process data, which makes the supercomputer faster while drastically reducing the amount of energy needed to perform a given calculation. This has huge implications for running complex simulations such as weather forecasting programs that crunch huge amounts of data. Such programs can be run much more quickly while producing significantly more accurate results.

Who’s Pursuing Residential 5G?

I’ve seen article after article over the last year talking about how 5G is going to bring gigabit speeds to residents and give them an alternative to the cable companies. But most of the folks writing these articles are confusing the different technologies and business cases that are all being characterized as 5G.

For example, Verizon has announced plans to aggressively pursue 5G for commercial applications starting later this year. The technology they are talking about is a point-to-point wireless link, reminiscent of the radios that have been in common use since MCI deployed microwave radios to disrupt Ma Bell’s monopoly. The new 5G radios use higher frequencies in the millimeter wave range and promise to deliver a few gigabits of speed over distances of a mile or so.

The technology will require a base transmitter with enough height to have a clear line of sight to the customer, likely sited on cell towers or tall buildings. Each link runs only between the transmitter and one customer. Verizon can use the technology to bring gigabit broadband to buildings not served with fiber today or to provide a second, redundant broadband feed to buildings with fiber.

The press has often confused this point-to-point technology with the technology that will be used to bring gigabit broadband to residential neighborhoods. That requires a different technology that is best described as wireless local loops. The neighborhood application is going to require pole-mounted transmitters that will be able to serve homes within perhaps 1,000 feet – meaning a few homes from each transmitter. In order to deliver gigabit speeds the pole-mounted transmitters must be fiber fed, meaning that realistically fiber must be strung up each street that is going to get the technology.

Verizon says it is investigating wireless local loops and hopes eventually to use the technology to reach 30 million homes. The key word there is eventually, since this technology is still in the early stages of field trials.

AT&T has said that it is not pursuing wireless local loops. On a recent call with investors, CFO John Stephens said that AT&T could not see a business case for the technology. He called the business case for wireless local loops tricky and said that in order to be profitable a company would have to have a good grasp on who was going to buy service from each transmitter. He said that AT&T is going to stick to its current network plans, which involve edging out from existing fiber, and that serving customers on fiber provides the highest quality product.

That is the first acknowledgement I’ve heard from one of the big telcos of the challenges of operating a widespread wireless network. We know from experience that fiber-to-the-home is an incredibly stable technology. Once installed it generally needs only minor maintenance and requires far less maintenance labor than competing technologies. We also know from many years of experience that wireless technologies require a lot more tinkering. Wireless technology is a lot more temperamental, and it might take a decade or more of continuous tweaking until wireless local loops become as stable as FTTH. Whoever deploys the first big wireless local loop networks had better have a fleet of technicians ready to keep them working well.

The last of the big telcos is CenturyLink, and its new CEO Jeff Storey has made it clear that the company is going to focus on high-margin enterprise business opportunities and stop deploying slow-payback technologies like residential broadband. I think we’ve seen the end of CenturyLink investing in any last-mile residential technologies.

So who will be deploying 5G wireless local loops? We know it won’t be AT&T or CenturyLink. We know Verizon is considering it but has made no commitment. It won’t be done by the cable companies, which have already upgraded to DOCSIS 3.1. There are no other candidates that are willing or able to spend the billions needed to deploy the new technology.

Every new technology needs to be adopted by at least one large ISP to become successful. Vendors won’t do the needed R&D or crank up the production process until they have a customer willing to place a large order for electronics. We’ve seen promising wireless technologies like LMDS and MMDS die in the past because no large ISP embraced the technologies and ordered enough gear to push the technology into the mainstream.

I look at the industry today and I just don’t see any clear success path for 5G wireless local loop electronics. The big challenge faced by wireless local loops is becoming less expensive than fiber-to-the-home. Until the electronics go through a few rounds of improvements that only come after field deployment, the technology is likely to require more technician time than FTTH. It’s hard to foresee anybody taking the chance on this in any grand way.

Verizon could make the leap of faith and sink big money into an untried technology, but that’s risky. We’re more likely to keep seeing press releases talking about field trials and the potential for the 5G technology. But unless Verizon or some other big ISP commits to sinking billions of dollars into the gear it’s likely that 5G local loop technology will fizzle as has happened to other wireless technologies in the past.

The Migration to an All-IP Network

Last month the FCC recommended that carriers adopt a number of security measures to help block hacking in the SS7 network (Signaling System 7). Anybody with telephone network experience is familiar with SS7. It has provided a second communication path that has been used to improve call routing and to implement the various calling features such as caller ID.

Last year it became public that the SS7 network has some serious vulnerabilities. In Germany hackers were able to use the SS7 network to connect to and empty bank accounts. Those specific flaws have been addressed, but security experts look at the old technology and realize that it’s open to attack in numerous ways.

It’s interesting to see the FCC make this recommendation because there was a time when it looked like SS7 would be retired and replaced. I remember reading articles over a decade ago that forecast the pending end of SS7. At that time everybody thought that our legacy telephone network was going to be quickly migrated to an all-IP network and that older technologies like SS7 and TDM would be retired from the telecom network.

This big push to convert to an IP voice network was referred to by the FCC as the IP transition. The original goal of the transition was to replace the nationwide networks that connect voice providers. This nationwide network is referred to as the interconnection network, and every telco, CLEC and cable company that is in the voice business is connected to it.

But somewhere along the line AT&T and Verizon hijacked the IP transition. All of a sudden the talk was about converting the last-mile TDM networks. Verizon and AT&T want to tear down rural copper and largely replace it with cellular. This was not the intention of the original FCC plan. The agency wanted to require an orderly transition of the interconnection network, not the last-mile customer network. The idea was to design a new network that would better support an all-digital world while still connecting to older legacy copper networks until they reach the end of their natural economic life. As an interesting side note, the same FCC has poured billions into extending the life of copper networks through the CAF II program.

Discussions about upgrading connections between carriers to IP fizzled out. The original FCC vision was to take a few years to study the best path to an all-IP interconnection network and then require telcos to move from the old TDM networks.

I recently had a client who wanted to establish an IP connection with one of the big legacy telcos – something I know is being done in some places. The telco told my client that it still requires a TDM interface, which surprised my client. This particular big telco was not yet ready to accept IP trunking connections.

I’ve also noticed that the costs for my clients to buy connections into the SS7 network have climbed over the past few years. That’s really odd when you consider that these are old networks and the core technology is decades old. These networks have been fully depreciated for many years and the idea that the cost to use SS7 is climbing is absurd. This harkens back to paying $700 per month for a T1, something that sadly still exists in a few markets.

When the FCC first mentioned the IP transition I would have fully expected that TDM between carriers would have been long gone by now. And with that would have gone SS7. SS7 will still be around in the last-mile network and at the enterprise level since it’s built into the features used by telcos and in the older telephone systems owned by many businesses. The expectation from those articles a decade ago was that SS7 and other TDM-based technologies would slowly fizzle as older products were removed from the market. An IP-based telecom network is far more efficient and cost effective and eventually all telecom will be IP-based.

So I am a bit puzzled about what happened to the IP transition. I’m sure it’s still being talked about by policy-makers at the FCC, but the topic has publicly disappeared. Is this ever going to happen or will the FCC be happy to let the current interconnection network limp along in an IP world?

SDN Finally Comes to Telecom

For years we’ve heard that Software Defined Networking (SDN) is coming to telecom. There has been some movement in that direction in routing on long-haul fiber routes, but mostly this network concept is not yet being used in telecom networks.

AT&T just announced the first major telecom deployment of SDN. They will be introducing more than 60,000 ‘white box’ routers into their cellular networks. White box means that the routers are essentially blank generic hardware that comes with no software or operating system. This differs from the normal routers from companies like Cisco that come with a full suite of software that defines how the box will function. In fact, in a traditional router the software costs a lot more than the hardware.

AT&T will now be buying low-cost hardware and loading their own software onto the boxes. This is not a new concept – the big data center companies like Facebook and Google have been doing this for several years. SDN lets a provider load only the software needed to support just the functions they need. The data center providers say that simplifying the software saves them a fortune in power and air conditioning costs since the routers are far more efficient.

AT&T is a little late to the game compared to the big web companies, and it’s probably taken them a lot longer to develop their own proprietary suite of cell site software since it’s a lot more complicated than the software for switches in a big data center. They wouldn’t want to hand their cell sites over to new software until it’s been tested hard in a variety of environments.

This move will save AT&T a lot of money over time. There’s the obvious savings on the white box routers. But the real savings is in efficiency. AT&T has a fleet of employees and contractors whose sole function is to upgrade cell sites. If you’ve followed the company you’ve seen that it takes them a while to introduce upgrades into their networks, since technicians often have to visit every cell site, each with different generations of hardware and software.

The company will still need to visit cell sites to make hardware changes, but the promise of SDN is that software changes can be implemented across the whole network in a short period of time. This means they can fix security flaws or introduce new features quickly. They will have a far more homogeneous network where cell sites use the same generations of hardware and software, which should reduce glitches and local problems. The company will save a lot on labor and contractor costs.

This isn’t good news for the rest of the industry. It means that Cisco and other router makers are going to sell far fewer telecom-specific routers. The smaller companies in the country have always ridden the coattails of AT&T and Verizon, whose purchases of switches and routers pulled down the cost of these boxes for everybody else. The big companies also pushed the switch manufacturers to constantly improve their equipment, and the volume of boxes sold justified the manufacturers’ investment in the needed R&D.

You might think that smaller carriers could also buy their own white box routers to save money. This looks particularly attractive since AT&T is developing some of the software collaboratively with other carriers and making the generic software available to everybody. But the generic base software is not the same software that will run AT&T’s new boxes. AT&T has undoubtedly sunk tens of millions into customizing the software further. Smaller carriers won’t have the resources to customize this software to make it fully functional.

This change will ripple through the industry in other ways. For years companies have hired technicians with Cisco certifications on various types of equipment, knowing that they understood the basics of how the software operates. But as Cisco and other brand-name routers are edged out of the industry there are going to be far fewer jobs for those who are Cisco certified. I saw an article a few years ago that predicted that SDN would decimate the technician workforce by eliminating a huge percentage of jobs over time. AT&T will now need surprisingly few engineers and techs at a central hub to update its whole network.

We’ve known this change has been coming for five years, but now the first wave of it is here. SDN will be one of the biggest transformational technologies we’ve seen in years – it will make the big carriers nimble, something they have never been. And they are going to make it harder over time for all of the smaller carriers that compete with them – something AT&T doesn’t mind in the least.

The Demand for Upload Speeds

I was recently at a public meeting about broadband in Davis, California and got a good reminder of why upload speeds are as important to a community as download speeds. One of the people making public comments talked about how uploading was essential to his household and how the current broadband products on the market were not sufficient for his family.

This man needed good upload speeds for several reasons. First, he works as a photographer, taking pictures and shooting video. It takes him hours to upload and send raw, uncompressed video to a customer, and the experience still feels like the dial-up days. His full-time job is as a network security consultant for a company that specializes in big data, so he needs to send and receive large files, and his home upload bandwidth is also inadequate for that – forcing him to go to an office for work that could otherwise be done from home. Finally, his daughter creates YouTube content and has the same problem uploading – particularly when her videos deal with time-sensitive current events and waiting four hours to get them to YouTube kills their timeliness.
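To put rough numbers on that complaint (the file size and speeds below are my own illustrative assumptions, not figures from the meeting), uploading even a modest batch of raw video takes a painfully long time on a typical asymmetric connection:

    # Illustrative upload-time arithmetic; the 50 GB batch size and the
    # upload speeds are assumptions for the example, not real measurements.
    def hours_to_upload(gigabytes, upload_mbps):
        bits = gigabytes * 8 * 1000**3               # decimal GB to bits
        return bits / (upload_mbps * 1_000_000) / 3600

    print(round(hours_to_upload(50, 5), 1))          # ~22 hours at 5 Mbps up
    print(round(hours_to_upload(50, 100), 1))        # ~1.1 hours at 100 Mbps up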

This family is not unusual anymore. A decade ago, a photographer led the community effort to get faster broadband in a city I was working with. But he was the only one asking for faster upload speeds, and most homes didn’t care about it.

Today a lot of homes need faster upload speeds. This particular family had numerous reasons including working from home, sending large data files and posting original content to the web. But these aren’t the only uses for faster upload speeds. Gamers now need faster upload speeds. Anybody who wants to remotely check their home security cameras cares about upload speeds. And more and more people are migrating to 2-way video communications, which requires those at both ends to have decent uploading. We are just now seeing the early trials of virtual presence where communications will be by big-bandwidth virtual holograms at each end of the communications.

Davis is like many urban areas in that the broadband products available have slow upload speeds. Comcast is the cable incumbent, and while they recently introduced a gigabit download product, their upload speeds are still paltry. DSL is offered by AT&T which has even slower upload speeds.

Technologies differ in their ability to offer upload speeds. For instance, DSL is technically capable of sending data at the same speed upstream or downstream. But DSL providers have elected to stress the download speed, which is what most people value, so DSL products are set with a small upload allotment and a lot of download. It would be possible to give customers the choice to vary the mix between upload and download speeds, but I’ve never heard of an ISP that offers this as an option.
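As a simple illustration of that trade-off (the numbers are hypothetical, not any specific DSL product), the provider is essentially deciding how to carve a roughly fixed aggregate bit rate between the two directions:

    # Hypothetical example of splitting a fixed DSL aggregate between directions.
    # The 25 Mbps total is an illustrative number, not a real product spec.
    def split_dsl(total_mbps, download_share):
        down = total_mbps * download_share
        up = total_mbps - down
        return down, up

    print(split_dsl(25, 0.9))    # the usual asymmetric profile: 22.5 down / 2.5 up
    print(split_dsl(25, 0.5))    # a symmetrical option I've never seen an ISP offer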

Cable modems are a different story. Historically the small upload speeds were baked directly into the DOCSIS standard. When CableLabs created DOCSIS they made upload speeds small in response to what the cable companies asked of them. Until recently, cable companies have had no option to increase upload speeds beyond the DOCSIS constraints. But CableLabs recently amended the DOCSIS 3.1 standard to allow for much faster upload speeds of nearly a gigabit. The first release of the DOCSIS 3.1 standard didn’t include this, but it’s now available.

However, a cable company has to make sacrifices in its network if it wants to offer faster uploads. It takes about 24 empty channels (meaning no TV signal) on a cable system to provide gigabit download speeds. A cable company would need to vacate many more channels of programming to also offer faster uploads, and I don’t think many of them will elect to do so. Programming is still king, and cable owners need to balance the demand for more channels against the demand for faster uploads.
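The arithmetic behind the 24-channel figure is easy to sketch, assuming the commonly cited throughput of a 6 MHz 256-QAM cable channel (the per-channel numbers below are rules of thumb I’m assuming, not figures from any cable operator):

    # Rough arithmetic behind "about 24 empty channels for a gigabit of download".
    # Assumed rule of thumb: a 6 MHz 256-QAM channel carries ~42.9 Mbps raw,
    # roughly 38 Mbps after overhead.
    raw_per_channel = 42.9
    usable_per_channel = 38.0
    print(24 * raw_per_channel)      # ~1030 Mbps raw
    print(24 * usable_per_channel)   # ~912 Mbps usable, roughly a gigabit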

Fiber has no real constraints on upload speeds up to the capability of the lasers. The common technologies being used for residential fiber all allow for gigabit upload speeds. Many fiber providers set speeds to symmetrical, but others have elected to limit upload speeds. The reason I’ve heard for that is to limit the attractiveness of their network for spammers and others who would steal the use of fast uploading. But even these networks offer upload speeds that are far faster than the cable company products.

As more households want to use uploading we are going to hear more demands for a faster upload option. But for now, if you want super-fast upload speeds you have to be lucky enough to live in a neighborhood with fiber-to-the-home.

The Looming Backhaul Crisis

I look forward a few years and I think we are headed towards a backhaul crisis. Demand for bandwidth is exploding and we are developing last-mile technologies to deliver the needed bandwidth, but we are largely ignoring the backhaul network needed to feed customer demand. I foresee two kinds of backhaul becoming a big issue in the next few years.

First is intercity backhaul. I’ve read several predictions that we are already using most of the available bandwidth on the fibers that connect major cities and the major internet POPs. It’s not hard to understand why. Most of the fiber between major cities was built in the late 1990s or even earlier, and much of that construction was funded by the telecom craze of the 90s where huge money was dumped into the sector.

But there has been very little new fiber construction on major routes since then, and I don’t see any carriers with business plans to build more. You’d think that we could get a lot more bandwidth out of the existing routes by upgrading the electronics on those fibers, but that’s not how the long-haul fiber network operates. Almost all of the fiber pairs on existing routes have been leased out to various entities for their own private use. The reality is that nobody really ‘owns’ these fiber routes, since each route is full of carriers that have long-term contracts to use a few of the fibers. As long as each of these entities has enough bandwidth for its own network purposes, none of them is going to sink big money into upgrading to terabit lasers, which are still very expensive.

Underlying that is a problem that nobody wants to talk about. Many of those fibers are aging and deteriorating. Over time fiber runs into problems and gets opaque. This can come from having too many splices in the fiber, or from accumulated microscopic damage from stress during fiber construction or due to temperature fluctuations. Fiber technology has improved tremendously since the 1990s – contractors are more aware of how to handle fiber during the construction period and the glass itself has improved significantly through improvements by the manufacturers.

But older fiber routes are slowly getting into physical trouble. Fibers go bad or lose capacity over time. This is readily apparent when looking at smaller markets. I was helping a client look at fibers going into Harrisburg, PA, and the fiber routes into the city are all old – built in the early 90s – and are experiencing regular outages. I’m not singling out Harrisburg as a unique case, because the same is true for a huge number of secondary communities.

We are going to see a second backhaul shortage that is related to the intercity bandwidth shortage. All of the big carriers are talking about building fiber-to-the-home and 5G networks that are capable of delivering gigabit speeds to customers. But nobody is talking about how to get the bandwidth to these neighborhoods. You are not going to be able to feed hundreds of 5G fixed wireless transmitters using the existing bandwidth that is available in most places.

Today the cellular companies are paying a lot of money to get gigabit pipes to the big cell towers. Most recent contracts include the ability for these connections to burst to 5 or 10 gigabits. Getting these connections is already a challenge. Picture multiplying that demand by hundreds or thousands of new cell sites. To use the earlier example of Harrisburg, PA – picture somebody trying to build a 100-node 5G network there, each node with gigabit connections to customers. This kind of network might initially work with a 10 gigabit backhaul connection, but as bandwidth demand keeps growing (doubling every three years), it won’t take long until this 5G network will need multiple 10 gigabit connections, up to perhaps 100 gigabits.
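A quick bit of arithmetic shows how fast that escalates, using the doubling-every-three-years growth rate mentioned above:

    import math

    # How long until a 10 Gbps backhaul connection needs to be 100 Gbps,
    # if demand doubles every three years (the growth rate cited above)?
    doubling_period_years = 3
    growth_needed = 100 / 10                      # tenfold growth
    years = doubling_period_years * math.log2(growth_needed)
    print(round(years, 1))                        # ~10 years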

Today’s backhaul network is not ready to supply this kind of bandwidth. You could build all of the fiber you want locally in Harrisburg to feed the 5G nodes, but that won’t make any difference if you can’t feed that whole network with sufficient bandwidth to get back to an Internet POP.

Perhaps a few carriers will step up and build the needed backhaul network. But I don’t see that multi-billion dollar per year investment listed in anybody’s business plans today – all I hear about are plans to rush to capture the residential market with 5G. Even if carriers step up and bolster the major intercity routes (and somebody probably will), that is only a tiny portion of the backhaul network that stretches to all of the Harrisburg markets in the country.

The whole backhaul network is already getting swamped due to the continued geometric growth of broadband demand. Local networks and backhaul networks that were robust just a few years ago can get overwhelmed by a continuous doubling of traffic volume. If you look at any portion of our existing backhaul network you can already see the stress today, and that stress will turn into backhaul bottlenecks in the near future.

Charter’s Plans for 6G

It didn’t take long for somebody to say they will have a 6G cellular product. Somebody has jumped the gun every time there has been a migration to a new cellular standard, and I remember the big cellular companies making claims about having 4G LTE technology years before it was actually available.

But this time it’s not a cellular company talking about 6G – it’s Charter, the second largest US cable company. Charter is already in the process of implementing LTE cellular through the resale of wholesale minutes from Verizon, so it will soon be a cellular provider. If Comcast’s early success is any indication, Charter might do well, since it has almost 24 million broadband customers.

Tom Rutledge, the Charter CEO, made reference to 5G trials being done by the company, but also went on to tout a new Charter product as 6G. What Rutledge is really talking about is a new product that will put a cellular micro cell in a home that has Charter broadband. This hot spot would provide strong cellular coverage within the home and use the cable broadband network as backhaul for the calls.

Such a network would benefit Charter by collecting a lot of cellular minutes that Charter wouldn’t have to buy wholesale from Verizon. Outside of the home customers would roam on the Verizon network, but within the home all calls would route over the landline connection. Presumably, if the home cellular micro transmitters are powerful enough, neighbors might also be able to get cellular access if they are Charter cellular customers. This is reminiscent of the Comcast WiFi hotspots that broadcast from millions of their cable modems.

This is not a new idea. For years farmers have been buying cellular repeaters from AT&T and Verizon to boost their signal if they live near the edge of cellular coverage. These products also use the landline broadband connection as backhaul – but in those cases the calls route to one of the cellular carriers. But in this configuration Charter would intercept all cellular traffic and presumably route the calls themselves. There are also a number of cellular resellers who have been using landline backhaul to provide low-cost calling.

This would be the first time that somebody has ever contemplated this on a large scale. One can picture large volumes of Charter cellular micro sites in areas where they are the incumbent cable company. When enough homes have transmitters they might almost create a ubiquitous cellular network that is landline based – eliminating the need for cellular towers.

It’s an interesting concept. A cable company is in some ways already well positioned to implement a more traditional small cell network. Once it has upgraded to DOCSIS 3.1 it can place a small cell site at any pole that is already connected to the cable network. For now the biggest hurdle to such a deployment is the small upload speeds of the first generation of DOCSIS 3.1, but CableLabs has already released a technology that will enable faster upload speeds, up to symmetrical connections. Getting faster upload speeds means finding more empty channel slots on the cable network, which could be a challenge in some networks.

The most interesting thing about this idea is that anybody with a broadband network could offer cellular service the same way if they can make a deal to buy wholesale minutes. But therein lies the rub. While there are now hundreds of ‘cellular’ companies, only a few of them own their own cellular networks and everybody else is reselling. Charter is large enough to feel reasonably secure about having long-term access to cellular minutes from the big cellular companies. But very few other landline ISPs are going to get that kind of locked-in arrangement.

I’ve always advised clients to be wary of any resale opportunity because the business can change on a dime when the underlying provider changes the rules of the game. Our industry is littered with examples of companies that went under when the large resale businesses they had built lost their wholesale product. The biggest such company that comes to mind is Talk America, which had amassed over a million telephone customers on lines resold from the big telcos. There are many other examples of paging resellers, long distance resellers and other telco resellers that only lasted as long as the underlying network providers agreed to supply the commodity. But this is such an intriguing idea that many landline ISPs are going to look at what Charter is doing and wonder why they can’t do the same.

Dig Once Rules Coming

US Representative Anna Eshoo of California has submitted a ‘dig once’ bill every year since 2009, and the bill finally passed in the House. For this to become law the bill still has to pass the Senate, but it got wide bipartisan support in the House.

Dig Once is a simple concept: it would mandate that when roads are under construction, empty conduit is placed in the roadbed to provide inexpensive access for somebody who wants to bring fiber to an area.

Here are some specifics included in the bill:

  • This would apply to Federal highway projects, but also to state projects that get any federal funding. It encourages states to apply this more widely.
  • For any given road project there would be ‘consultation’ with local and national telecom providers and conduit would be added if there is an expected demand for fiber within 15 years.
  • The conduit would be installed under the hard surface of the road at industry standard depths.
  • The conduits would contain pull tape that would allow for easy pulling of fiber in the future.
  • Handholes would be placed at intervals consistent with industry best practices.

This all sounds like good stuff, but I want to play devil’s advocate with some of the requirements.

The initial concept of dig once was to never pass up the opportunity to place conduit into an ‘open ditch’. The cost of digging probably represents 80% of the cost of deploying conduit in most places. But this law is not tossing conduit into open construction ditches. It instead requires that the conduit be placed at depths that meet industry best practices. And that is going to mean digging a foot or more deeper than the construction planned for the roadbed.

To understand this you have to look at the lifecycle of roads. When a new road is constructed the road bed is typically dug from 18 inches deep to 3 feet deep depending upon the nature of the subsoil and also based upon the expected traffic on the road (truck-heavy highways are built to a higher standard than residential streets). Typically roads are then periodically resurfaced several times when the road surface deteriorates. Resurfacing usually requires going no deeper than a few inches into the roadbed. But at longer intervals of perhaps 50 years (differs by local conditions) a road is fully excavated to the bottom of the roadbed and the whole cycle starts again.

This means the conduit needs to be placed lower than the planned bottom of the roadbed. Otherwise, when the road is finally rebuilt all of the fiber would be destroyed. And going deeper means additional excavation and additional cost, so the conduit would not simply be placed in the ‘open ditch’. The road project will have dug out the first few feet of the needed excavation, but additional, and expensive, work would be needed to put the conduit at a safe depth. In places where the substrate is rock this could be incredibly expensive, and it wouldn’t be cheap anywhere. It seems to me that this shifts the cost of deploying long-haul fiber onto road projects rather than fiber providers. There is nothing wrong with that if it’s the national policy and there are enough funds to pay for it – but in a country that already struggles to maintain its roads, I worry that this will just mean less money for roads since every project just got more expensive.

The other issue of concern to me is handholes and access to the fiber. This is pretty easy for an Interstate and there ought to be fiber access at every exit. There are no customers living next to Interstates and these are true long-haul fibers that stretch between communities.

But spacing access points along secondary roads is a lot more of a challenge. For instance, if you want a fiber route to serve businesses and residents in a city, that means an access point every few buildings. In more rural areas it means an access point at every home or business. Adding access points is the second most labor-intensive part of a fiber build after the construction itself. If access points aren’t where they are needed, in many cases the fiber will be nearly worthless. It’s probably cheaper to later build a second fiber route with the proper access points than to try to add them to a poorly designed existing route.

This law has great intentions. But it is based upon the concept that we should take advantage of construction that’s already being paid for. I heartily support the concept for Interstate and other long-haul highways. But the concept is unlikely to be sufficient on secondary roads with lots of homes and businesses. And no matter where this is done it’s going to add substantial cost to highway projects.

I would love to see more fiber built where it’s needed. But this bill adds a lot of cost to building highways, which are already underfunded in this country. And if not done properly – meaning placing fiber access points where they are needed – this could end up building a lot of conduit that has little practical use for a fiber provider. By making this a mandate everywhere, it is likely to mean spending a whole lot of money on conduit that might never be used, or used only for limited purposes like feeding cellular towers. This law is not going to create fiber that’s ready to serve neighborhoods or those living along highways.

Virtual Reality and Broadband

For the second year in a row Turner Sports, in partnership with CBS and the NCAA, will be streaming March Madness basketball games in virtual reality. Watching the games has a few catches. The content can only be viewed on two VR headsets – the Samsung Gear VR and the Google Daydream View. Viewers can buy individual games for $2.99 or all of them for $19.99. And a viewer must be subscribed to the networks associated with the broadcasts – CBS, TNT, TBS and truTV.

Virtual reality viewers get a lot of options. They can choose which camera to watch from or else opt for the Turner feed that switches between cameras. When the tournament reaches the Sweet 16 viewers will receive play-by-play from a Turner team broadcasting only for VR viewers. The service also comes with a lot of cool features like the ability to see stats overlays on the game or on a particular player during the action. Games are not available for watching later, but there will be a big library of game highlights.

Last year Turner offered the same service, but only for 6 games. This year the line-up has been expanded to 21 games that includes selected regionals in the first and second round plus Sweet Sixteen and Elite Eight games. The reviews from last year’s viewers were mostly great and Turner is expecting a lot more viewers this year.

Interestingly, none of the promotional materials mention the needed bandwidth. The cameras being used for VR broadcasts are capable of capturing virtual reality in 4K, but Turner won’t be broadcasting in 4K because of the required bandwidth. Charles Cheevers, the CTO of Arris, said last year that a 4K VR stream requires at least a 50 Mbps connection. That’s over 30 times more bandwidth than a Netflix stream.

Instead these games will be broadcast in HD video at 60 frames per second. According to Oculus that requires a data stream of 14.4 Mbps for ideal viewing. Viewing at slower speeds results in missing some of the frames. Many VR viewers complain about getting headaches while watching VR, and the primary reason for the headaches is missing frames. While the eye might not be able to notice the missing frames, the brain apparently can.

One has to ask if this is the future of sports. The NFL says it’s not ready yet to go to virtual reality until there is more standardization between different VR sets – they fear for now that VR games will have a limited audience due to the number of viewers with the right headsets. But the technology has been tried for football and Fox broadcast the Michigan – Notre Dame game last fall in virtual reality.

All the sports networks have to be looking at Turner’s pricing of $2.99 per game and calculating the potential new revenue stream from broadcasting more games in VR in addition to traditional cable broadcasts. Some of the reviews I read of last year’s NCAA broadcasts said that after watching a game in VR, normal TV broadcasts seemed boring. Many of us are familiar with this feeling. I can’t watch linear TV anymore. It’s not just sitting through the commercials, it’s being captive to the stream rather than watching the way I want. We can quickly learn to love a better experience.

Sports fans are some of the most intense viewers of any content. It’s not hard to imagine a lot of sports fans wanting to watch basketball, football, hockey or soccer in VR. Since the format favors action sports it’s also not hard to imagine the format also drawing viewers to rugby, lacrosse and other action sports.

It’s possible that 4K virtual reality might finally be the app that justifies fast fiber connections. There is nothing else on the Internet today that requires that much speed plus low latency. Having several simultaneous viewers in a home watching 4K VR would require speeds of at least a few hundred Mbps. You also don’t need to look out too far to imagine virtual reality in 8K, requiring a data stream of at least 150 Mbps – which might be the first home application that can justify a gigabit connection.
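The household math follows directly from the numbers above (the four simultaneous viewers are just an illustrative assumption):

    # Household bandwidth arithmetic using the per-stream figures cited above.
    # The number of simultaneous viewers is an illustrative assumption.
    viewers = 4
    vr_4k_mbps = 50        # per-stream requirement for 4K VR cited above
    vr_8k_mbps = 150       # per-stream requirement for 8K VR cited above
    print(viewers * vr_4k_mbps)   # 200 Mbps, a few hundred Mbps territory
    print(viewers * vr_8k_mbps)   # 600 Mbps, heading toward a gigabit connection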

Spectrum and 5G

All of the 5G press has been talking about how 5G is going to be bringing gigabit wireless speeds everywhere. But that is only going to be possible with millimeter wave spectrum, and even then it requires a reasonably short distance between sender and receiver as well as bonding together more than one signal using multiple MIMO antennae.

It’s a shame that we’ve let the wireless marketeers equate 5G with gigabit because that’s what the public is going to expect from every 5G deployment. As I look around the industry I see a lot of other uses for 5G that are going to produce speeds far slower than a gigabit. 5G is a standard that can be applied to any wireless spectrum and which brings some benefits over earlier standards. 5G makes it easier to bond multiple channels together for reaching one customer. It also can increase the number of connections that can be made from any given transmitter – with the biggest promise that the technology will eventually allow connections to large quantities of IOT devices.

Anybody who follows the industry knows about the 5G gigabit trials. Verizon has been loudly touting its gigabit 5G connections using the 28 GHz frequency and plans to launch the product in up to 28 markets this year. They will likely use this as a short-haul fiber replacement to allow them to more quickly add a new customer to a fiber network or to provide a redundant data path to a big data customer. AT&T has been a little less loud about their plans and is going to launch a similar gigabit product using 39 GHz spectrum in three test markets soon.

But there are also a number of announcements for using 5G with other spectrum. For example, T-Mobile has promised to launch 5G nationwide using its 600 MHz spectrum. This is a traditional cellular spectrum that is great for carrying signals for several miles and for going around and through obstacles. T-Mobile has not announced the speeds it hopes to achieve with this spectrum. But the data capacity of 600 MHz is limited, and bonding numerous signals together for one customer will create something faster than LTE, but not spectacularly so. It will be interesting to see what speeds they can achieve in a busy cellular environment.

Sprint is taking a different approach and is deploying 5G using its 2.5 GHz spectrum. The company has been testing massive MIMO antennas that contain 64 transmit and 64 receive channels. This spectrum doesn’t travel far, so the technology will work best in small cell deployments. The company claims to have achieved speeds as fast as 300 Mbps in trials in Seattle, but that would require bonding together a lot of channels, so a commercial deployment is going to be a lot slower in a congested cellular environment.

Outside of the US there seems to be growing consensus to use 3.5 GHz – the Citizens Broadband Radio Service (CBRS) band. That raises the interesting question of which frequencies will end up winning the 5G race. In every new wireless deployment the industry needs to reach an economy of scale in the manufacture of both the radio transmitters and the cellphones or other receivers. Only then can equipment prices drop to the point where a 5G-capable phone is similar in price to a 4G LTE phone. So the industry at some point soon will need to reach a consensus on the frequencies to be used.

In the past we rarely saw a consensus; rather, some manufacturer and wireless company won the race to get customers and dragged the rest of the industry along. This has practical implications for early adopters of 5G. For instance, somebody buying a 600 MHz phone from T-Mobile is only going to be able to use that data function when near a T-Mobile tower or mini-cell. Until industry consensus is reached, phones that use a unique spectrum are not going to be able to roam on other networks the way they do today with LTE.

Even phones that use the same spectrum might not be able to roam on other carriers if they are using the frequency differently. There are now 5G standards, but we know from practical experience with other wireless deployments in the past that true portability between networks often takes a few years as the industry works out bugs. This interoperability might be sped up a bit this time because it looks like Qualcomm has an early lead in the manufacture of 5G chip sets. But there are other chip manufacturers entering the game, so we’ll have to watch this race as well.

The word of warning to buyers of first generation 5G smartphones is that they are going to have issues. For now it’s likely that the MIMO antennae are going to use a lot of power and will drain cellphone batteries quickly. And the ability to reach a 5G data signal is going to be severely limited for a number of years as the cellular providers extend their 5G networks. Unless you live and work in the heart of one of the trial 5G markets it’s likely that these phones will be a bit of a novelty for a while – but will still give a user bragging rights for the ability to get a fast data connection on a cellphone.