Massive MIMO

One of the technologies that will bolster 5G cellular is the use of massive MIMO (multiple-input, multiple-output) antenna arrays. Massive MIMO is an extension of the smaller MIMO antennas that have been used for several years. For example, home WiFi routers now routinely use multiple antennas to allow for easier connections to multiple devices. Basic forms of MIMO technology have been deployed in LTE cell sites for several years.

Massive MIMO differs from current technology in its use of much larger arrays of antennas. For example, Sprint, along with Nokia, demonstrated a massive MIMO transmitter in 2017 that used 128 antennas, with 64 for receive and 64 for transmit. Sprint is in the process of deploying a much smaller array in cell sites using its 2.5 GHz spectrum.

Massive MIMO can be used in two different ways. First, multiple transmitter antennas can be focused together to reach a single customer (who also needs multiple receivers) to increase throughput. In the trial mentioned above, Sprint and Nokia were able to achieve a 300 Mbps connection to a beefed-up cellphone. That's a lot more bandwidth than can be achieved from a single transmitter, which at most can deliver whatever bandwidth is possible on the one channel of spectrum being used.

The extra bandwidth is achieved in two ways. First, using multiple transmitters means that multiple data streams on the same frequency can be sent simultaneously to the same receiving device. Both the transmitter and receiver must have sophisticated computing power to coordinate and combine the multiple signals.

The bandwidth is also boosted by what's called precoding or beamforming. This technology coordinates the signals from multiple transmitters to maximize the received signal gain and to reduce what is called the multipath fading effect. In simple terms, beamforming sets the power level and phase for each separate antenna to maximize the data throughput. Every frequency and channel operates a little differently, and beamforming favors the channels and frequencies with the best operating characteristics in a given environment. Beamforming also allows the cellular signal to be concentrated in a portion of the receiving area – to create a 'beam'. This is not the same kind of highly concentrated beam used in microwave transmitters, but concentrating the radio signals into the general area of the customer means a more efficient delivery of data packets.
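For readers who want a feel for what precoding does, here is a minimal sketch (in Python, with a made-up single-user channel) of maximum-ratio beamforming, one common precoding scheme. The antenna count and channel model are illustrative assumptions, not a description of any carrier's actual implementation; the point is simply that phase-aligning many transmit antennas multiplies the received signal strength.

```python
import numpy as np

rng = np.random.default_rng(0)

n_antennas = 64   # transmit antennas at the cell site (illustrative)

# Made-up flat-fading channel from each antenna to a single user
h = (rng.standard_normal(n_antennas) + 1j * rng.standard_normal(n_antennas)) / np.sqrt(2)

# Maximum-ratio precoding: phase-align each antenna so the signals add coherently
w = np.conj(h) / np.linalg.norm(h)

snr_one_antenna = np.abs(h[0]) ** 2   # received power using a single antenna
snr_beamformed = np.abs(h @ w) ** 2   # received power with all antennas combined

print(f"array gain: {snr_beamformed / snr_one_antenna:.1f}x")
# On average the gain is roughly n_antennas (about 18 dB for 64 antennas)
```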

The cellular companies, though, are focused on the second use of MIMO – the ability to connect to more devices simultaneously. One of the key parameters of the 5G cellular specifications is the ability of a cell site to make up to 100,000 simultaneous connections. The carriers envision 5G as the platform for the Internet of Things and want to use cellular bandwidth to connect to the many sensors envisioned in our near-future world. This first generation of massive MIMO won't bump cell sites to 100,000 connections, but it's a first step toward increasing the number of connections.

Massive MIMO is also going to facilitate the coordination of signals from multiple cell sites. Today's cellular networks are based upon a roaming architecture. That means that a cellphone or any other device that wants a cellular connection will grab the strongest available cellular signal. That's normally the closest cell site but could be a more distant one if the nearest site is busy. With roaming, a cellular connection is handed off from one cell site to the next as a customer moves through cellular coverage areas.

One of the key aspects of 5G is that it will allow multiple cell sites to connect to a single customer when necessary. That might mean combining the signals from MIMO antennas in two neighboring cell sites. In most places this is not particularly useful today since cell sites tend to be fairly far apart. But as we migrate to smaller cells, the chances of a customer being in range of multiple cell sites increase. Combining cell sites could be useful when a customer wants a big burst of data, and coordinating the MIMO signals between neighboring cell sites can temporarily give a customer the extra needed bandwidth. That kind of coordination will require sophisticated operating systems at cell sites and is certainly an area that the cellular manufacturers are now working on in their labs.

The Continued Growth of Data Traffic

Every one of my clients continues to see explosive growth of data traffic on their broadband networks. For several years I've been citing a Cisco statistic that says household use of data has doubled every three years since 1980. In Cisco's latest Visual Networking Index, published in 2017, the company predicted a slight slowdown in data growth, to doubling about every 3.5 years.

I searched the web for other predictions of data growth and found a report published by Seagate, also in 2017, titled Data Age 2025: The Evolution of Data to Life-Critical. This report was authored for Seagate by the consulting firm IDC.

The IDC report predicts that annual worldwide web data will grow from the 16 zettabytes of data used in 2016 to 163 zettabytes in 2025 – a tenfold increase in nine years. A zettabyte is a mind-numbingly large number that equals a trillion gigabytes. That increase means an annual compounded growth rate of 29.5%, which more than doubles web traffic every three years.
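As a quick sanity check on those numbers, here is the arithmetic in a few lines of Python; the only inputs are the 16 and 163 zettabyte figures cited above.

```python
import math

start_zb, end_zb, years = 16, 163, 9                  # zettabytes, 2016 to 2025

cagr = (end_zb / start_zb) ** (1 / years) - 1         # compound annual growth rate
doubling_time = math.log(2) / math.log(1 + cagr)      # years to double at that rate

print(f"CAGR: {cagr:.1%}")                            # roughly 29.4%
print(f"doubling time: {doubling_time:.1f} years")    # roughly 2.7 years
```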

The most recent burst of overall data growth has come from the migration of video online. IDC expects online video to keep growing rapidly, but also foresees a number of other web uses that are going to increase data traffic by 2025. These include:

  • The continued evolution of data from business background to "life-critical". IDC predicts that as much as 20% of all future data will be life-critical, meaning it will directly impact our daily lives, with nearly half of that data being hypercritical. As an example, they note that a computer crash today might cause us to lose a spreadsheet, but data used to communicate with a self-driving car must be delivered accurately. They believe that the software needed to ensure such accuracy will vastly increase the volume of traffic on the web.
  • The proliferation of embedded systems and the IoT. Today most IoT devices generate tiny amounts of data. The big growth in IoT data will not come directly from the IoT devices and sensors in the world, but from the background systems that interpret this data and make it instantly usable.
  • The increasing use of mobile and real-time data. Again, using the self-driving car as an example, IDC predicts that more than 25% of data will be required in real-time, and the systems necessary to deliver real-time data will explode usage on networks.
  • Data usage from cognitive computing and artificial intelligence systems. IDC predicts that data generated by cognitive systems – machine learning, natural language processing and artificial intelligence – will generate more than 5 zettabytes by 2025.
  • Security systems. As we have more critical data being transmitted, the security systems needed to protect the data will generate big volumes of additional web traffic.

Interestingly, this predicted growth all comes from machine-to-machine communications that result from moving more of our daily functions onto the web. Computers will be working in the background exchanging and interpreting data to support activities such as traveling in a self-driving car or chatting with somebody in another country using a real-time interpreter. We are already seeing the beginning stages of numerous technologies that will require big real-time data.

Data growth of this magnitude is going to require our data networks to grow in capacity. I don't know of any client network that is ready to handle a ten-fold increase in data traffic, and carriers will have to beef up backbone networks significantly over time. I have often seen clients invest in new backbone electronics that they hoped would be good for a decade, only to find the upgraded networks swamped within a few years. It's hard for network engineers and CEOs to fully grasp the impact of continued rapid data growth on our networks, and it's more common than not to underestimate future traffic growth.

This kind of data growth will also increase the pressure for faster end-user data speeds and more robust last-mile networks. If a rural 10 Mbps DSL line feels slow today, imagine how slow it will feel when urban connections are far faster than today. If the trends IDC foresees hold true, by 2025 there will be many homes needing and using gigabit connections. It's common, even in the industry, to scoff at the usefulness of residential gigabit connections, but when our data needs keep doubling it's inevitable that we will need gigabit speeds and beyond.

Optical Loss on Fiber

One issue that isn't much understood except by engineers and fiber technicians is optical loss on fiber. While fiber is an incredibly efficient medium for transmitting signals, there are still factors that cause the signal to degrade. In new fiber routes these factors are usually minor, but over time problems with fiber accumulate. We're now seeing some of the long-haul fibers from the 1980s go bad due to accumulated optical signal losses.

Optical signal loss is described as attenuation. Attenuation is a reduction in the power and clarity of a light signal that diminishes the ability of the optical receiver to demodulate the data being received. Any factor that degrades the optical signal is said to increase the attenuation.

Engineers describe several kinds of phenomena that can degrade a fiber signal:

  • Chromatic Dispersion. This is the phenomenon where a signal gets distorted over distance because different frequencies of light travel at different speeds. Lasers don't generally create a single light frequency, but rather a range of slightly different colors, and those different colors travel through the fiber at slightly different speeds. This is one of the primary factors limiting the distance a fiber signal can be sent before it needs to pass through a repeater to restore and resynchronize the signal. More expensive lasers generate purer light and can transmit further; these better lasers are used on long-haul fiber routes that might go 60 miles between repeaters, while FTTH networks aren't recommended to travel more than 10 miles.
  • Modal Dispersion. Some fibers are designed to have slightly different paths for the light signal and are called multimode fibers. A fiber system can transmit different data paths through the separate modes. A good analogy for the modes is to think of them as separate tubes inside of a conduit. But these are not physically separate paths; the modes are created by making different parts of the fiber strand from slightly different glass material. Modal dispersion comes from the light traveling at slightly different speeds through the different modes.
  • Insertion Loss. This is the loss of signal that happens when the light signal moves from one medium to another. Insertion loss occurs at splice points, where fiber passes through a connector, or when the signal is regenerated through a repeater or other device sitting in the fiber path.
  • Return Loss. This is the loss of signal due to interference caused when some of the light is reflected backwards in the fiber. While the glass used in fiber is clear, it's never perfect, and some photons are reflected backwards and interfere with oncoming light signals.

Fiber signal loss can be measured with test equipment that compares the received signal to an ideal signal. The losses are expressed in decibels (dB). New fiber networks are designed with a low total dB loss so that there is headroom over time to accommodate natural damage and degradation. Engineers are able to calculate the amount of loss that can be expected for a signal traveling through a fiber network – called a loss budget. For example, they know that a fiber signal will degrade some specific amount, say 1 dB, just from passing through a certain type and length of fiber. They might expect a loss of 0.3 dB for each splice along a fiber and 0.75 dB when a fiber passes through a connector.
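Here is a minimal loss-budget sketch in Python using the illustrative per-element losses mentioned above; the per-kilometer fiber loss and the 3 dB aging margin are my own assumptions, and a real design would use the vendor's specified values.

```python
FIBER_LOSS_DB_PER_KM = 0.35    # assumed loss for single-mode fiber at 1310 nm
SPLICE_LOSS_DB = 0.3           # per splice, from the example above
CONNECTOR_LOSS_DB = 0.75       # per connector, from the example above

def loss_budget(km_of_fiber, splices, connectors, margin_db=3.0):
    """Expected end-to-end dB loss for a fiber path, plus headroom for aging and repairs."""
    loss = (km_of_fiber * FIBER_LOSS_DB_PER_KM
            + splices * SPLICE_LOSS_DB
            + connectors * CONNECTOR_LOSS_DB)
    return loss + margin_db

# Example: a 20 km path with 6 splices and 2 connectors
print(f"{loss_budget(20, 6, 2):.1f} dB")   # must stay under what the optics can tolerate
```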

The biggest signal losses on fiber generally come at the end of a fiber path at the customer premises. Flaws like bends or crimps in the fiber increase return loss. Going through multiple splices increases the insertion loss. Good installation practices are by far the most important factor in minimizing attenuation and providing a longer life for a given fiber path.

Network engineers also understand that fibers degrade over time. Fibers might get cut and have to be re-spliced. Connectors get loose and don't make perfect light connections. Fiber can expand and shrink from temperature extremes and create more reflection. Tiny manufacturing flaws like microscopic cracks grow over time, creating opacity and dispersing the light signal.

This is not all bad news, and modern fiber electronics allow for a fairly high level of dB loss before the fiber loses functionality. A fiber installed properly, using quality connectors and good splices, can last a long time.

Telecom Containers

There is a new lingo being used by the large telecom companies that will be foreign to the rest of the industry – containers. In the simplest definition, a container is a relatively small set of software that performs one function. The big carriers are migrating to software systems that use containers for several reasons, the primary one being the migration to software-defined networks.

A good example of a container is a software application for a cellular company that can communicate with the sensors used in crop farming. The cellular carrier would install this particular container in cell sites where there is a need to communicate with field sensors but would not install it at the many cell sites where such communication isn't needed.

The advantage to the cellular carrier is that they have simplified their software deployment. A rural cell site will have a different set of containers than a small cell site deployed near a tourist destination or a cell site deployed in a busy urban business district.

The benefits of this are easy to understand. Consider the software that operates our PCs. The PC manufacturers fill the machine up with every application a user might ever want. However, most of us use perhaps 10% of the applications that are pre-installed on our computer. The downside to having so many software components is that it takes a long time to upgrade the software on a PC – my Mac laptop has at times taken an hour to install a new operating system update.

In a software-defined network, the ideal configuration is to move as much of the software as possible to the edge devices – in this particular example, to the cell site. Today every cell site must hold and process all of the software needed by any cell site anywhere. That's costly in terms of the computing power needed at the cell site and inefficient in that cell sites are running applications that will never be used. In a containerized network each cell site will run only the modules needed locally.
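Here is a toy sketch, in Python, of that per-site idea; the container names and site profiles are hypothetical and exist only to illustrate deploying just the software each cell site actually needs.

```python
# Hypothetical container names and site profiles, for illustration only
BASE_CONTAINERS = ["radio-control", "billing-agent", "monitoring"]

SITE_PROFILES = {
    "rural-farm":     ["farm-sensor-gateway"],
    "tourist-small":  ["event-traffic-shaper"],
    "urban-business": ["enterprise-vpn", "high-density-scheduler"],
}

def manifest(profile):
    """Containers to deploy at a cell site with the given profile."""
    return BASE_CONTAINERS + SITE_PROFILES.get(profile, [])

print(manifest("rural-farm"))
# ['radio-control', 'billing-agent', 'monitoring', 'farm-sensor-gateway']
```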

The cellular carrier can make an update to the farm sensor container without interfering with the other software at a cell site. That adds safety – if something goes wrong with that update, only the farm sensor network will experience a problem instead of possibly pulling down the whole network of cell sites. One of the biggest fears of operating a software defined network is that an upgrade that goes wrong could pull down the entire network. Upgrades made to specific containers are much safer, from a network engineering perspective, and if something goes wrong in an upgrade the cellular carrier can quickly revert to the back-up for the specific container to reestablish service.

The migration to containers makes sense for a big telecom carrier. Each carrier can develop unique containers that define its specific product set. In the past most carriers bought off-the-shelf applications like voice mail – but with containers they can more easily customize products to operate as they wish.

Like most things that are good for the big carriers, there is a long-term danger from containers for the rest of us. Over time the big carriers will develop their own containers and processes that are unique to them. They’ll create much of this software in-house and the container software won’t be made available to others. This means that the big companies can offer products and features that won’t be readily available to smaller carriers.

In the past the products and features available to smaller ISPs were the result of product research done by telecom vendors for the big ISPs. Vendors developed software for cellular switches, voice switches, routers, set-top boxes, ONTs and all of the hardware used in the industry. Vendors could justify spending money on software development due to expected sales to the large ISPs. However, as the big ISPs migrate to a world where they buy empty boxes and develop their own container software, there won't be a financial incentive for the hardware vendors to put effort into software applications. Companies like Cisco are already adapting to this change, and it's going to trickle through the whole industry over the next few years.

This is just one more thing that will make it a little harder in future years to compete with the big ISPs. Perhaps smaller ISPs can band together somehow and develop their own product software, but it’s another industry trend that will give the big ISPs an advantage over the rest of us.

No Takers for Faster DSL

It's been obvious for over a decade that the big telcos have given up on DSL. AT&T was the last big telco to bite on upgraded DSL. They sold millions of lines of U-verse connections that combined two pairs of copper and used VDSL or ADSL2 to deliver up to 50 Mbps download speeds. Those speeds were only available to customers who lived within 3,000 to 4,000 feet of a DSL hub, but for a company that owns all of the copper, that was a lot of potential customers.

Other big telcos didn't bite on the paired-copper DSL, and communities served by Verizon, CenturyLink, Frontier and others are still served by older DSL technologies that deliver speeds of 15 Mbps or less.

The companies that manufacture DSL equipment continued to do research and have developed faster DSL technologies. The first breakthrough was G.fast, which is capable of delivering speeds approaching a gigabit, but only over short distances of up to a few hundred feet. The manufacturers hoped the technology would be used to build fiber-to-the-curb networks, but that economic model never made much sense. However, G.fast is finally seeing use as a way to distribute high bandwidth inside apartment buildings or larger businesses using the existing telephone copper, without having to rewire a building.

Several companies like AdTran and Huawei have continued to improve DSL and, through a technique known as supervectoring, have been able to goose speeds as high as 300 Mbps. The technology achieves the improved bandwidth in two ways. First, it uses higher frequencies inside the telephone copper. DSL works somewhat like an HFC cable network in that it uses RF technology to create the data transmission waves inside the captive wiring network. Early generations of DSL used frequencies up to 8 MHz, and the newer technologies climb as high as 35 MHz. The supervectoring aspect of the technology comes through techniques that cancel interference at the higher frequencies.
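A very rough way to see why the wider frequency range matters is the Shannon limit, C = B · log2(1 + SNR). The sketch below assumes a single average SNR across the whole band, which real copper doesn't have (higher frequencies attenuate much faster, and the vectoring is what keeps the SNR usable up there), so treat it only as an illustration of the scaling, not a model of any vendor's gear.

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db):
    """Shannon limit for one channel: C = B * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

# Assumed 25 dB average SNR in both cases, purely for illustration
print(f"{shannon_capacity_mbps(8e6, 25):.0f} Mbps")    # ~8 MHz band used by earlier DSL
print(f"{shannon_capacity_mbps(35e6, 25):.0f} Mbps")   # ~35 MHz supervectored profile
```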

In the US this new technology is largely without takers. AdTran posted a blog saying that there doesn't seem to be a US market for faster DSL. That's going to be news to the millions of homes that are still using slower DSL. The telcos could upgrade speeds to as much as 300 Mbps for a cost of probably no more than a few hundred dollars per customer. This would provide for another decade of robust competition from telephone copper. While 300 Mbps is not as fast as the gigabit speeds now offered by cable companies using DOCSIS 3.1, it's as fast as the cable broadband still sold to most homes.

This new generation of DSL technology could enable faster broadband to millions of homes. I've visited dozens of small towns in the country, many without a cable competitor, where DSL speeds are still 6 Mbps or less. The big telcos have milked customers for years to pay for the old DSL and are not willing to invest some of those earnings back into another technology upgrade. To me this is largely due to deregulating the broadband industry: there are no regulators pushing the big telcos to do the right thing. Upgrading would be the right thing because the telcos could retain millions of DSL customers for another decade, so it would also be a smart economic decision.

There is not a lot of telephone copper around the globe; it was only widely deployed in North America and Europe. In Germany, Deutsche Telekom (DT) is deploying supervectoring DSL to about 160,000 homes this quarter. The technical press there is lambasting them for not making the leap straight to fiber. DT counters by saying that they can deliver the bandwidth that households need today. The new deployment is driving them to build fiber deeper into neighborhoods, and DT expects to then make the transition to all-fiber within a decade. Households willing to buy bandwidth between 100 Mbps and 300 Mbps are not going to care what technology is used to deliver it.

There is one set of companies willing to use the faster DSL in this country. There are still some CLECs who are layering DSL onto telco copper, and I’ve written about several of these CLECs over the last few months. I don’t know any who are specifically ready to use the technology, but I’m sure they’ve all considered it. They are all leery about making any new investments in DSL upgrades since the FCC is considering eliminating the requirement that telcos provide access to the copper wires. This would be a bad regulatory decision since there are companies willing to deliver a faster alternative to cable TV by using the telco copper lines. It’s obvious that none of the big telcos are going to consider the faster DSL and we shouldn’t shut the door on companies willing to make the investments.

Technology Promises

I was talking to one of my buddies the other day and he asked what happened to the promise made fifteen years ago that we’d be able to walk up to vending machines and buy products without having to use cash or a credit card. The promise that this technology was coming was based upon a widespread technology already in use at the time in Japan. Japan has vending machines for everything and Japanese consumers had WiFi-based HandiPhones that were tied into many vending machines.

However, this technology never made it to the US, and in fact largely disappeared in Japan. Everybody there, and here, converted to smartphones, and the technology that used WiFi phones faded away. As with many technologies, the ability to do something like this requires a whole ecosystem of meshing parts – in this case it requires vending machines able to communicate with the customer's device, apps on the consumer device able to make purchases, and a banking system ready to accept the payments. We know that smartphones can be made to do this, and in fact there have been several attempts to do so.

But the other two parts of the ecosystem are problems. First, we’ve never equipped vending machines to be able to communicate using cellular spectrum. The holdup is not the technology, but rather the fear of hacking. In today’s world we are leery about installing unmanned edge devices that are linked to the banking system for fear that such devices can become entry points for hackers. This same fear has throttled the introduction of any new financial technology and is why the US was years behind Europe in implementing the credit card readers that accept chips.

The biggest reason we don’t have cellular vending machines is that the US banking system has never gotten behind the idea of micropayments, which means accepting small cash transactions – for example, charging a nickel every time somebody reads a news article. Much of the online world is begging for a micropayment system, but the banking fee structure is unfriendly to the idea of processing small payments – even if there will be a lot of them. The security and micropayment issues have largely been responsible for the slow rollout of ApplePay and other smartphone cash payment systems.

This is a perfect example of an unfulfilled technology. One of the most common original claims for the benefits of ubiquitous cellular was a cashless society where we could wave our phone to buy things – but the entrenched old-technology banking system effectively squashed the technology, although people still want it.

I look now at the many promises being made for 5G and I already see technology promises that are not likely to be delivered. I have read hundreds of articles promising that 5G is going to completely transform our world. It's supposed to provide gigabit cellular service that will make landline connections obsolete. It will enable fleets of autonomous vehicles sitting ready to take us anywhere at a moment's notice. It will provide the way to communicate with hordes of sensors around us that will make us safer and our world smarter.

As somebody who understands the current telecom infrastructure I can’t help but be skeptical about most of these claims. 5G technology can be made to fulfill the many promises – but the ecosystem of all of the components needed to make these things happen will create roadblocks to that future. It would take two pages just to list all of the technological hurdles that must be overcome to deliver ubiquitous gigabit cellular service. But perhaps more importantly, as somebody who understands the money side of the telecom industry, I can’t imagine who is going to pay for these promised innovations. I’ve not seen anybody promising gigabit cellular predicting that monthly cellphone rates will double to pay for the new network. In fact, the industry is instead talking about how the long-range outlook for cellular pricing is a continued drop in prices. It’s hard to imagine a motivation for the cellular companies to invest huge dollars for faster speeds for no additional revenue.

This is not to say that 5G won't be introduced or that it won't bring improvements to cellular service. But I believe that if, a decade from now, we pull out some of the current articles written about 5G, we'll see that most of the promised benefits were never delivered. If I'm still writing a blog I can promise this retrospective!

 

p.s. – I can't ignore that sometimes the big technology promises come to pass. Some of you remember the series of AT&T ads that talked about the future. One of my favorites asked the question, "Have you ever watched the movie you wanted, the minute you wanted to?" That ad was from 1993 and promised a future where content would be at our fingertips. That was an amazing prediction for a time when dial-up was still a new industry. Any engineer at that time would have been skeptical about our ability to deliver large bandwidth to everybody – something that is still a work in progress. Of course, that same ad also promised video phone booths, a concept that is quaint in a world full of smartphones.

Fiber in Apartment Buildings

For many years a lot of my clients with fiber networks have avoided serving large apartment buildings. There were two primary causes for this. First, there have always been issues with getting access to buildings. Landlords control access to their buildings and some landlords have made it difficult for a competitor to enter their building. I could write several blogs about that issue, but today I want to look at the other historical challenge to serving apartments – the cost of rewiring many apartment buildings has been prohibitive.

There are a number of issues that can make it hard to rewire an apartment building. Some older buildings have concrete floors and plaster walls and are hard to drill for wires. A landlord might have restrictions due to aesthetics and not want to see any visible wiring. A landlord might not allow adequate access to equipment for installations or repairs, particularly after hours. A landlord might not have a safe space for an ISP's core electronics or have adequate power available.

But assuming that a landlord is willing to allow a fiber overbuilder in, and is reasonable about aesthetics and similar issues, many apartment owners now want fiber since their tenants are asking for faster broadband. There are new ways to serve apartments, not available in the past, that can make it possible to do so in a cost-effective manner.

G.Fast has come of age, and the equipment is now affordable and readily available from several vendors. A number of telcos have been using the technology to improve broadband speeds in apartment buildings. The technology works by using frequencies higher than DSL over the existing telephone copper in the building. The copper wire is mostly owned by the landlord, who can generally grant access to the telephone patch panel to multiple ISPs.

CenturyLink reports speeds over 400 Mbps using G.Fast, enabling a range of broadband products. The typical deployment brings fiber to the telecom communications space in the building, with jumpers made to the copper wire for customers wanting faster broadband. Telcos are reporting that G.Fast offers good broadband up to about 800 feet, which is more than adequate for most apartment buildings.

Calix now also offers a G.Fast variant that works over coaxial cable. This is generally harder to use because it's harder to get access to the coaxial home runs to each apartment. Typically an ISP would need access to all of the coaxial cable in a building to use this G.Fast variation. But it's worth investigating since it increases speeds to around 500 Mbps and extends distances to 2,000 feet.

Millimeter Wave Microwave. A number of companies are using millimeter wave radios to deliver bandwidth to apartment buildings. This is not using the 5G standard, but current radios can deliver two gigabits for about one mile or one gigabit for up to two miles. The technology is mostly being deployed in larger cities to avoid the cost of laying urban fiber, but there is no reason it can’t be used in smaller markets where there is line-of-sight from an existing tower to an apartment building. The radios are relatively inexpensive with a pair of them costing less than $5,000.
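To put the economics in perspective, here is a hypothetical back-of-the-envelope calculation; every number other than the radio cost and link speeds cited above is my own assumption for illustration, not vendor pricing.

```python
radio_pair_cost = 5_000       # pair of millimeter-wave radios, from the figure above
wiring_cost_per_unit = 300    # assumed in-building fiber riser/drop per apartment
units = 100                   # apartments in the building (assumed)
link_mbps = 2_000             # rooftop link capacity at about a mile, from above

cost_per_unit = radio_pair_cost / units + wiring_cost_per_unit
worst_case_share = link_mbps / units   # if every unit peaked at the same moment

print(f"capital cost per unit: ${cost_per_unit:,.0f}")             # about $350
print(f"worst-case shared bandwidth: {worst_case_share:.0f} Mbps per unit")
```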

It's an interesting model in that the broadband must be extended to customers from the rooftop rather than the basement. The typical deployment would run fiber from the rooftop radio down through risers and out to apartment units.

The good news with stringing fiber in apartments is that wiring technology is much improved. There are now several different fiber wiring systems that are easy to install and unobtrusive, hiding the fiber along the corners of the ceiling.

Many ISPs are finding that the new wiring systems alone are making it viable to string fiber in buildings that were too expensive to serve just a few years ago. If you've been avoiding apartment buildings because they're too hard to serve, you might want to take another look.

Update on ATSC 3.0

A few months ago the FCC authorized the implementation of equipment using the ATSC 3.0 standard. The industry has known this has been coming for several years, which has given TV manufacturers the ability to start designing the standard into antennas and TV sets.

ATSC 3.0 is the first major upgrade to broadcast TV since the transition to digital signals (DTV) in 2009. This is a breakthrough upgrade to TV since it introduces broadband into the TV transmission signal. The standard calls for transforming the whole over-the-air transmission to IP, which means that broadcasters will be able to mix IP-based services in with normal TV transmissions. This opens up a whole world of possibilities such as providing reliable 4K video through the air, allowing for video-on-demand, providing immersive high-quality audio and greatly improving the broadcast emergency alert system. It also brings the whole array of digital features we are used to from streaming services, like program guides, actor bios and any other kind of added information a station wants to send to customers.

From an economic standpoint this provides a powerful tool for local TV stations to provide an improved and more competitive product. It does complicate the life of any station that elects to sell advanced services because it puts them into the business of selling products directly to the public. Because the signal is IP, stations can sell advanced packages to customers that can only be accessed with a password, like online TV services. However, this puts local stations into the retail business where they must be able to take orders, collect payments and take calls from customers – something they don't do today.

It creates an interesting financial dynamic for the TV industry. Today local network stations charge cable companies a lot of money in retransmission fees for carrying their content. But most of that money passes through the local stations and gets passed up to the major networks like ABC or NBC. ATSC 3.0 is going to allow stations to directly market advanced TV service to customers, and it's likely that many of these customers will be cord cutters lured away from traditional cable by the advanced ATSC 3.0 services they can buy from their local networks. This puts the local network affiliates directly into competition with their parent networks, and it will be interesting to watch that tug of war.

This also opens up yet one more TV option for customers. FCC rules will still require that anybody with an antenna can receive TV for free over the air. But customers will have an additional option to buy an advanced TV package from the local station. If local stations follow the current industry model they are likely to charge $3 to $5 per month for access to their advanced features, and the jury is still out on how many people are willing to buy numerous channels at that price.

There are other interesting aspects to the new protocol. It allows for more efficient use of the TV spectrum, meaning that TV signals should be stronger and should also penetrate better into buildings. The TV signals will also be available to smartphones equipped with an ATSC 3.0 receiver in their chipsets. This could enable a generation of young viewers who only watch local content on their phones. Station owners also have other options. They could license and allow other content to ride along with their signal. We might see local stations that bundle Netflix in with their local content.

We probably aren’t going to see many ATSC 3.0 devices in the market until next year as TV and other device makers build ATSC 3.0 tuners into their hardware. Like anything this new it’s probably going to take four or five years for this to go mainstream.

It's going to be an interesting transition to watch because it gives power back to local stations to compete against cable companies. And it provides yet one more reason why people might choose to cut the cord.

Gluing Fiber

A lot of people asked for my opinion about a recent news article that talks about a new technology for gluing fiber directly to streets. The technology comes from a start-up, Traxyl, and involves adhering fiber to streets or other hard surfaces using a hard resin coating. In early trials the company says the coating can withstand weather, snowplows and a 50-ton excavator. The company is predicting that the coating ought to be good for ten years.

Until I see this over time in real life I don’t have any way to bless or diss the technology. But I have a long list of concerns that would have to be overcome before I’d recommend using it. I’d love to hear other pros and cons from readers.

Surface Cuts. No matter how tough the coating, the fiber will be broken when there is a surface cut in the street. Shallow surface cuts happen a lot more often than cuts to deeper fiber, and even microtrenched fiber at 6 inches is a lot safer. As an example, on my residential street in the last year there have been two water main breaks and one gas leak that caused the city to cut from curb to curb across the whole road surface. I wouldn't be shocked in a city of this size (90,000 population) if there aren't a dozen such road cuts somewhere in the city every day. This makes me wonder about the true useful life of the fiber, because that's a lot of outages to deal with.

I also worry about smaller road disturbances. Anything that breaks the road surface is going to break the fiber. That could be road heaving, tree roots or potholes. I’d hate to lose my fiber every time a pothole formed under it.

Repaving. Modern roads undergo a natural cycle. After initial paving roads are generally repaved every 10-15 years by laying down a new coat of material on top of the existing road. During the preparation for repaving it’s not unusual to lightly groom the road and perhaps scrape off a little bit of the surface. It seems like this process would first cut the fiber in multiple places and would then bury the fiber under a few inches of fresh macadam. I would think the fiber would have to be replaced since there would be no access to the fiber after repaving.

The repaving process is generally done 2 to 4 times during the life of a street until there's a need for a full rebuild of the road. In a full rebuild the roadbed is excavated to the substrate and any needed repairs are made before the road is fully repaved. This process would completely remove the glued fiber from the street (as it would also remove micro-trenched fiber).

Outage time frames. The vendor says that a cut can be mended by either ungluing and fixing the existing fiber or else placing a new fiber patch over the cut. That sounds like something that can be done relatively quickly. My concern comes from the nature of road cuts. It's not unusual for a road cut to be in place for several days when there is a major underground problem with gas or water utilities. That means fiber cuts might go days before they can be repaired. Worse, the process of grading and repaving a road might take the fiber out of service for weeks or longer. Customers on streets undergoing repaving might lose broadband for a long time.

Cost. The vendor recognizes many of these issues. One of their suggestions to mitigate the problems would be to lay a fiber on both sides of a street. I see two problems with that strategy. First, it doubles the cost. They estimate a cost of $15,000 per mile, and this becomes less attractive at $30,000 per mile. Further, two fibers don't fix the problem of repaving. It doesn't even fully solve road cuts; it just halves the number of households affected by a given cut (since each fiber serves one side of the street).

I’m also concerned about lifecycle cost. Buried conduit ought to be good for a century or more, and the fiber in those conduits might need to be replaced every 50 – 60 years. Because of street repaving the gluing technology might require new fiber 5 – 7 times in a century, making this technology significantly more expensive in the long run. Adding in the cost of constantly dealing with fiber cuts (and the customer dissatisfaction that comes with outages), this doesn’t sound customer friendly or cost effective.
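For anyone who wants to test that intuition with their own numbers, here is a trivially simple lifecycle comparison in Python; the only figure taken from above is the $15,000 per mile estimate, and the rebuild counts are placeholders you should replace with local assumptions.

```python
def lifecycle_cost_per_mile(install_cost, builds_per_century):
    """Total per-mile cost over 100 years if the plant has to be rebuilt that many times."""
    return install_cost * builds_per_century

# Glued fiber at the $15,000/mile figure, rebuilt at each assumed repaving cycle
for rebuilds in (5, 6, 7):
    print(rebuilds, f"${lifecycle_cost_per_mile(15_000, rebuilds):,} per mile per century")

# Compare the result against your local cost to build a buried conduit route once
# (plus perhaps one fiber re-pull mid-century), and add the cost of repairing cuts.
```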

The article suggests dealing with the fiber cuts by using some sort of a mesh network that I guess would create multiple local rings. This sounds interesting, but there are no fiber electronics that work that way today. If fiber is laid on both sides of the street, then a cut in one fiber knocks out the people on that side of the street. I can’t envision a PON network that could be done any other way.

These are all concerns that would worry me as a network operator. We bury fiber 3-4 feet underground to avoid all of the issues that worry me about fiber at the surface. To be fair, I can think of numerous applications where this could be beneficial. This might be a great way to lay fiber inside buildings. It might make sense to connect buildings in a campus environment. It would alleviate the issues of bringing fiber through federal park land where it’s hard to get permission to dig. It could be used to bring a second path to a customer that demands redundancy. It might even be a good way to get fiber to upper floors of high-rises where the existing fiber ducts are full. But I have a hard time seeing this as a last mile residential network. I could be proven to be wrong, but for now I’m skeptical.

A Deeper Look at 5G

The consulting firm Bain & Company recently looked at the market potential for 5G. They concluded that there is an immediate business case to be made for 5G deployment. They go on to conclude that 5G ‘pessimists’ are wrong. I count myself as a 5G pessimist, but I admit that I look at 5G mostly from the perspective of the ability of 5G to bring better broadband to small towns and rural America. I agree with most of what Bain says, but I take the same facts and am still skeptical.

Bain says that the most immediate use for 5G deployment is in urban areas. They cite an interesting statistic I’ve never seen before that says that it will cost $15,000 – $20,000 to upgrade an existing cell site with 5G, but will cost between $65,000 and $100,000 to deploy a new 5G node. Until the cost for new 5G cell sites comes way down it’s going to be hard for anybody to justify deploying new 5G cell sites except in those places that have potential business to support the high investment cost.

Bain recommends that carriers deploy 5G quickly in those places where it's affordable in order to be first to market with the new technology. Bain also recommends that cellular carriers take advantage of improved mobile performance, but also look hard at the fixed 5G opportunity to deliver last-mile broadband. They say that an operator that maximizes both opportunities should be able to see a fast payback.

A 5G network deployed on existing cell towers is going to create small circles of prospective residential broadband customers – and those circles aren't going to be very big. Delivering significant broadband would mean reaching customers within perhaps 1,000 to 1,500 feet of a transmitter. Cell towers today are much farther apart than those distances, and this means a 5G delivery map consisting of scattered small circles.
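To make the "small circles" point concrete, here is a quick calculation; the coverage radius comes from the range mentioned above, while the household density is an assumption for a reasonably dense urban neighborhood.

```python
import math

radius_feet = 1_500           # upper end of the range mentioned above
homes_per_sq_mile = 2_000     # assumed urban household density

area_sq_miles = math.pi * (radius_feet / 5_280) ** 2
homes_in_range = area_sq_miles * homes_per_sq_mile

print(f"coverage area: {area_sq_miles:.2f} square miles")   # about 0.25
print(f"homes in range: {homes_in_range:.0f}")              # about 500 at this density
```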

There are not many carriers willing to tackle that business plan. It means selectively marketing only to those households within range of a 5G cell site. AT&T is the only major ISP that already uses this business plan. AT&T currently offers fiber to any homes or businesses close to their numerous fiber nodes. They could use that same sales plan to sell fixed broadband to customers close to each 5G cell site. However, AT&T has said that, at least for now, they don’t see a business case for 5G similar to their fiber roll-out.

Verizon could do this, but they have been walking away from a lot of their residential broadband opportunities, going so far as to sell a lot of their FiOS fiber customers to Frontier. Verizon says they will deploy 5G in several cities starting next year but has never talked about the number of potential households they might cover. This would require a major product roll-out for T-Mobile or Sprint, but in the document they filed with the FCC to support their merger they said they would tackle this market. Both companies currently lack the fleet of technicians and the backoffice needed to support the fixed residential broadband market.

The report skims past the question of the availability of 5G technology. Like any new technology, the first few generations of field equipment are going to have problems. Most players in the industry have learned the lesson of not widely deploying any new technology until it's well-proven in the field. Verizon says their early field trials have gone well, and we'll have to wait until next year to see how much 5G they are ready to deploy with first-generation technology.

Bain also says there should be no 'surge' in capital expenditures if companies deploy 5G wisely – but the reverse is also true, and bringing 5G small cells to places without existing fiber is going to be capital intensive. I agree with Bain that, technology concerns aside, the only place where 5G makes sense for the next few years is urban areas, and mostly on existing cell sites.

I remain a pessimist about 5G being feasible in more rural areas. The cost of the electronics will need to drop to a fraction of today's cost. There are always going to be pole issues for deploying small cells in rural America – even if regulators streamline the hanging of small cell sites, those 5G devices can't be placed onto the short poles we often see in rural America. While small circles of broadband delivery might support an urban business model, the low density in rural America might never make economic sense.

I certainly could be wrong, but I don’t see any companies sinking huge amounts of money into 5G deployments until the technology has been field-proven and until the cost of the technology drops and stabilizes. I hope I am proven wrong and that somebody eventually finds a version of the technology that will benefit rural America – but I’m not going to believe it until I can kick the tires.