Verizon’s Residential 5G Broadband

We finally got a look at the details of Verizon's 5G residential wireless product. The company announced that it will be available to some customers in Houston, Indianapolis, Los Angeles and Sacramento starting on October 1.

Verizon promises average download speeds of around 300 Mbps. Verizon has been touting a gigabit wireless product for the last year, but the realities of wireless in the wild seem to have made that unrealistic. Still, 300 Mbps is a competitive broadband product, and in many markets Verizon will become the fastest alternative to the cable companies. As we've seen across the country, a decent competitor to the big cable companies is almost assured of a 20% or higher market penetration just for showing up.

The product will be $50 per month for customers who use Verizon wireless and $70 for those who don't. These prices will supposedly include all taxes, fees and equipment – although it's possible that there will be add-ons like a charge for a Verizon WiFi router. That pricing is going to be attractive to anybody who already has Verizon cellular – and I'm sure the company is hoping to use this to attract more cellular customers. This is the kind of bundle that can make cellular stickier and is exactly what Comcast and Charter have in mind as they begin offering cellular themselves. Verizon is also offering marketing inducements for the roll-out: three months of free YouTube TV, or a free Apple TV 4K or Google Chromecast Ultra.

Theoretically this should set off a bit of a price war in cities where Comcast and Charter are the incumbent cable providers. It wouldn’t be hard for those companies to meet or beat the Verizon offer since they are already selling cellular at a discount. We’re going to get a fresh look at oligopoly competition – will the cable companies really battle it out? The cable companies have to be worried about losing significant market share in major urban markets.

We’re also going to have to wait a while to see the extent of the Verizon coverage areas. I’ve been speculating about this for a while and I suspect that Verizon is going to continue with their history of being conservative and disciplined. They will deploy 5G where there is fiber that can affordably support it – but they are unlikely to undertake any expensive fiber builds just for this product. Their recently announced ‘One Fiber’ policy says just that – the company wants to capitalize on the huge amount of network that they have already constructed for other purposes. This means it’s likely in any given market that coverage will depend upon a customer’s closeness to Verizon fiber.

There is one twist that suggests Verizon might not want to deploy this too quickly. The company has been working with Ericsson, Qualcomm, Intel and Samsung to create proprietary equipment based upon its 5GTF standard. But the rest of the industry has adopted the 3GPP standard for 5G, and Verizon admits it will eventually have to replace any equipment built to its interim standard.

Verizon has also said over the last year that it wanted the product to be self-installed by customers. At least for now the installations are going to require a truck roll, which will add to the cost and slow the pace of deployment of the new technology.

Interestingly, these first markets are outside of Verizon’s telco footprint. This means that Verizon will not only be taking on cable companies, but that they might be putting the final nail in the coffin of DSL offered by AT&T and other telcos in the new markets. Verizon is unlikely to roll this out to compete with their own FiOS product unless deployments are incredibly inexpensive. But this might finally bring a Verizon broadband product to neighborhoods in the northeast that never got FiOS.

It’s going to be a while under we understand the costs of this deployment. Verizon has been mum about the specific network elements and reliance on fiber needed to support the product. And they have been even quieter about the all-in cost of deployment.

Cities all over the country are going to get excited about this deployment in the hope of getting a real competitor to cable companies that are often near-monopolies. It appears that the product is going to work best where there is already a fiber-rich environment. Most urban areas, while having little last-mile fiber, are crisscrossed with fiber used to reach large businesses, governments, schools, etc.

The same is not necessarily true in suburbs, and definitely not true of smaller communities and rural America. The technology depends upon local last-mile fiber for backhaul. Verizon says its potential market will eventually pass 30 million households, or a little less than 25% of the US market. I'd have to think that the coverage maps for other carriers, except perhaps AT&T, will largely coincide with the Verizon map. It seems that Verizon wants to be first to market to dissuade other entrants. We'll have to wait and see if a market can reasonably support more than one last-mile 5G provider – because companies like T-Mobile also have plans for wide deployment.

More Crowding in the OTT Market

It seems like I've been seeing news almost weekly about new online video providers. This will put even more pressure on cable companies as more people find an online programming option that suits them. It also means a shakeout of the OTT industry is likely, with such a crowded field of competitors all vying for the same pool of cord-cutters.

NewTV. This is an interesting new OTT venture founded by Jeffrey Katzenberg, former chairman of Walt Disney Studios, and headed by Meg Whitman, former CEO of Hewlett Packard Enterprise and also a Disney alum. The company has raised $1 billion and has support from every major Hollywood studio including 21st Century Fox, Disney, NBCUniversal, Sony Pictures Entertainment, and Viacom.

Rather than take on Netflix and the other OTT providers directly, the company plans to develop short 10-minute shows aimed exclusively at cellphone users. They plan both free content supported by advertising and a subscription plan akin to the 'advertising-light' option used by Hulu.

AT&T. The company already owns a successful OTT product in HBO Now, which has over 5 million customers. John Stankey, the head of WarnerMedia, says the plan is to create additional bundles of content centered around HBO that bring in other WarnerMedia content and selected external content. He admits that HBO alone does not represent enough content to be a full-scale OTT alternative for customers.

AT&T’s goal is to take advantage of HBO’s current reputation and to position their content in the market as premium and high quality as a way to differentiate themselves from other OTT providers.

Apple. The company has been talking about getting into the content business for a decade and has finally pulled the trigger. It invested $1 billion this year and now has 24 original series in production as the beginning of a new content platform. Among the new shows is a series about a morning TV show starring Reese Witherspoon and Jennifer Aniston.

The company hired Jamie Erlicht and Zack Van Amburg from Sony Pictures Television to operate the new business and has since hired other experienced television executives. They also are working on other new content and just signed a multiyear deal with Oprah Winfrey. The company has not announced any specific plans for airing and using the new content, but that will be coming soon since the first new series will probably be ready by March of 2019.

T-Mobile. As part of the proposed merger with Sprint, T-Mobile says it plans to launch a new 'wireless first' TV platform that will deliver 4K video using its cellular network. In January T-Mobile purchased Layer3, which has been offering a 275-channel HD line-up in a few major markets.

The T-Mobile offering will differ from other OTT products in that the company is shooting for what it calls the quad play – a bundle of video, in-home broadband (delivered using cellular frequencies), mobile broadband and voice. The company says the content will only be made available to T-Mobile customers, and it views the offering as a way to reduce churn and gain cellular market share.

The Layer3 subsidiary will also continue to pursue partnerships to gain access to customers through fiber networks, such as the arrangement it currently has with the municipal fiber network in Longmont, Colorado.

Disney. Earlier this year the company announced the creation of a direct-to-consumer video service based upon its huge library of popular content. Disney gained the needed technology by purchasing BAMTech, the company that supports Major League Baseball online. Disney is also bolstering its content portfolio through the purchase of 21st Century Fox.

Disney plans to launch an ESPN-based sports bundle in early 2019. They have not announced specific plans on how and when to launch the rest of their content, but they canceled an agreement with Netflix for carrying Disney content.

The Continued Growth of Data Traffic

Every one of my clients continues to see explosive growth of data traffic on their broadband networks. For years I've been citing a Cisco statistic that says household use of data has doubled every three years since 1980. In its last Visual Networking Index, published in 2017, Cisco predicted a slight slowdown in data growth, to doubling about every 3.5 years.

I searched the web for other predictions of data growth and found a report published by Seagate, also in 2017, titled Data Age 2025: The Evolution of Data to Life-Critical. This report was authored for Seagate by the consulting firm IDC.

The IDC report predicts that annual worldwide data will grow from the 16 zettabytes used in 2016 to 163 zettabytes in 2025 – a tenfold increase in nine years. A zettabyte is a mind-numbingly large number equal to a trillion gigabytes. That increase implies a compound annual growth rate of roughly 29.5%, which more than doubles traffic every three years.
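
That growth arithmetic is easy to verify from the report's two endpoints; here's a quick sketch:

```python
import math

# The IDC/Seagate endpoints: 16 zettabytes in 2016, 163 zettabytes in 2025.
start_zb, end_zb = 16, 163
years = 2025 - 2016  # nine years

# Compound annual growth rate implied by the two endpoints.
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~29.4%, which the report rounds to 29.5%

# How long does traffic take to double at that rate?
doubling = math.log(2) / math.log(1 + cagr)
print(f"Doubling time: {doubling:.1f} years")  # ~2.7 years, i.e. more than doubling every three
```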

The most recent burst of overall data growth has come from the migration of video online. IDC expects online video to keep growing rapidly, but also foresees a number of other web uses that are going to increase data traffic by 2025. These include:

  • The continued evolution of data from business background to “life-critical”. IDC predicts that as much as 20% of all future data will become life-critical, meaning it will directly impact our daily lives, with nearly half of that data being hypercritical. As an example, they note that a computer crash today might cause us to lose a spreadsheet, but data used to communicate with a self-driving car must be delivered accurately and on time. They believe that the software needed to ensure such accuracy will vastly increase the volume of traffic on the web.
  • The proliferation of embedded systems and the IoT. Today most IoT devices generate tiny amounts of data. The big growth in IoT data will not come directly from the IoT devices and sensors in the world, but from the background systems that interpret this data and make it instantly usable.
  • The increasing use of mobile and real-time data. Again, using the self-driving car as an example, IDC predicts that more than 25% of data will be required in real-time, and the systems necessary to deliver real-time data will explode usage on networks.
  • Data usage from cognitive computing and artificial intelligence systems. IDC predicts that data generated by cognitive systems – machine learning, natural language processing and artificial intelligence – will generate more than 5 zettabytes by 2025.
  • Security systems. As we have more critical data being transmitted, the security systems needed to protect the data will generate big volumes of additional web traffic.

Interestingly, this predicted growth all comes from machine-to-machine communications that result from moving more daily functions onto the web. Computers will be working in the background exchanging and interpreting data to support activities such as traveling in a self-driving car or chatting with somebody in another country using a real-time interpreter. We are already seeing the beginning stages of numerous technologies that will require big real-time data.

Data growth of this magnitude is going to require our data networks to grow in capacity. I don't know of any client network that is ready to handle a ten-fold increase in data traffic, and carriers will have to beef up backbone networks significantly over time. I have often seen clients invest in new backbone electronics that they hoped would be good for a decade, only to find the upgraded networks swamped within a few years. It's hard for network engineers and CEOs to fully grasp the impact of continued rapid data growth on our networks, and it's more common than not to underestimate future traffic growth.

This kind of data growth will also increase the pressure for faster end-user data speeds and more robust last-mile networks. If a rural 10 Mbps DSL line feels slow today, imagine how slow it will feel when urban connections are far faster than today's. If the trends IDC foresees hold true, by 2025 there will be many homes needing and using gigabit connections. It's common, even in the industry, to scoff at the usefulness of residential gigabit connections, but when household data use keeps doubling it's inevitable that we will need gigabit speeds and beyond.

Going Wireless-only for Broadband

According to New Street Research (NSR), up to 14% of homes in the US could go all-wireless for broadband. They estimate that there are 17 million homes that use little enough bandwidth to justify satisfying their broadband needs strictly with a cellular connection. NSR says that only about 6.6 million homes have elected to go all-wireless today, meaning there is a sizable gap of around 10 million more homes for which wireless might be a reasonable alternative.

The number of households that are going wireless-only has been growing. Surveys by Nielsen and others have shown that the trend to go wireless-only is driven mostly by economics, helped by the ability of many people to satisfy their broadband demands using WiFi at work, school or other public places.

NSR also predicts that the number of homes that can benefit by going wireless-only will continue to shrink. They estimate that only 14 million homes will benefit by going all-wireless within five years – with the decrease due to the growing demand of households for more broadband.

There are factors that make going wireless an attractive alternative for those that don’t use much broadband. Cellular data speeds have been getting faster as cellular carriers continue to implement full 4G technology. The first fully compliant 4G cell site was activated in 2017 and full 4G is now being deployed in many urban locations. As speeds get faster it becomes easier to justify using a cellphone for broadband.

Of course, cellular data speeds need to be put into context. A good 4G connection might be in the range of 15 Mbps. That speed feels glacial when compared to the latest speeds offered by cable companies. Both Comcast and Charter are in the process of increasing data speeds for their basic product to between 100 Mbps and 200 Mbps depending upon the market. Cellphones also tend to have sluggish operating systems that are tailored for video and that can make regular web viewing feel slow and clunky.

Cellular data speeds will continue to improve as we see the slow introduction of 5G into the cellular network. The 5G specification calls for cellular data speeds of 100 Mbps download when 5G is fully implemented. That transition is likely to take another decade, and even when implemented isn't going to mean fast cellular speeds everywhere. The only way to achieve 100 Mbps speeds is by combining multiple spectrum paths to a given cellphone user, probably from multiple cell sites. Most of the country, including most urban and suburban neighborhoods, is not going to be saturated with multiple small cell sites – the cellular companies are going to deploy faster cellular speeds only in areas that justify the expenditure. The major cellular providers have all said that they will be relying on 4G LTE for a long time to come.

One of the factors that is making it easier to go wireless-only is that people have access throughout the day to WiFi, which is powered from landline broadband. Most teenagers would claim that they use their cellphones for data, but most of them have access to WiFi at home and school and at other places they frequent.

The number one factor that drives people to go all-wireless for data is price. Home broadband is expensive by the time you add up all of the fees from a cable company. Since most people in the country already have a cellphone, dropping the home broadband connection is a good way for the budget-conscious to control their expenses.

The wireless carriers are also making it easier to go all wireless by including some level of video programming with some cellular plans. These are known as zero-rating plans that let a customer watch some video for free outside of their data usage plan. T-Mobile has had these plans for a few years and they are now becoming widely available on many cellular plans throughout the industry.

The monthly data caps on most wireless plans are getting larger. For the careful shopper who lives in an urban area there are usually a handful of truly unlimited data plans. Users have learned, though, that many such plans heavily restrict tethering to laptops and other devices. But data caps have crept higher across the board in the industry compared to a few years ago. Users who are willing to pay more for data can now buy the supposedly unlimited data plans from the major carriers that are actually capped at between 20 and 25 GB per month.

There are always other factors to consider like cellular coverage. I happen to live in a hilly wooded town where coverage for all of the carriers varies block by block. There are so many dead spots in my town that it’s challenging to use cellular even for voice calls. I happen to ride Uber a lot and it’s frustrating to see Uber drivers get close to my neighborhood and get lost when they lose their Verizon signal. This city would be a hard place to rely only on a cellphone. Rural America has the same problem and regardless of the coverage maps published by the cellular companies there are still huge areas where rural cellular coverage is spotty or non-existent.

Another factor that makes it harder to go all-wireless is working from home. Cellphones are not always adequate when trying to log onto corporate WANs or for downloading and working on documents, spreadsheets and PowerPoints. While tethering to a computer can solve this problem, it doesn’t take a lot of working from home to surpass the data caps on most cellular plans.
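
Some back-of-the-envelope arithmetic shows why; the bitrates below are my own illustrative assumptions, not carrier figures:

```python
# Rough sketch: how quickly does typical use eat a 25 GB monthly cap?
CAP_GB = 25  # the upper end of the 'unlimited' throttle points cited above

def gb_per_hour(mbps: float) -> float:
    """Convert a sustained stream in megabits/second to gigabytes/hour."""
    return mbps * 3600 / 8 / 1000

hd_video = gb_per_hour(5)    # ~2.25 GB per hour of HD streaming (assumed bitrate)
video_call = gb_per_hour(2)  # ~0.9 GB per hour of video conferencing (assumed bitrate)

print(f"HD video hours in a {CAP_GB} GB cap: {CAP_GB / hd_video:.0f}")      # ~11 hours
print(f"Video-call hours in a {CAP_GB} GB cap: {CAP_GB / video_call:.0f}")  # ~28 hours
```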

I’ve seen a number of articles in the last few years talking claiming that the future is wireless and that we eventually won’t need landline broadband. This claim ignores the fact that the amount of data demanded by the average household is doubling every three years. The average home uses ten times or more data on their landline connection today than on their cellphones. It’s hard to foresee the cellphone networks able to close that gap when the amount of landline data use keeps growing so rapidly.

Upgrading Broadband Speeds

A few weeks ago Charter increased my home broadband speed from 60 Mbps to 130 Mbps with no change in price. My upload speed seems unchanged at 10 Mbps. Comcast is in the middle of similar upgrades and is increasing base download speeds to between 100 Mbps and 200 Mbps in various markets.

I find it interesting that while the FCC is having discussions about keeping the definition of broadband at 25 Mbps, the big cable companies – these two alone have over 55 million broadband customers – are unilaterally increasing broadband speeds.

These companies aren’t doing this out of the goodness of their hearts, but for business reasons. First, I imagine that this is a push to sharpen the contrast with DSL. There are a number of urban markets where customers can buy 50 Mbps DSL from AT&T and others and this upgrade opens up a clear speed difference between cable broadband and DSL.

However, I think the main reason they are increasing speeds is to keep customers happy. This change was done quietly, so I suspect that most people had no idea that the change was coming. I also suspect that most people don’t regularly do speed tests and won’t know about the speed increase – but many of them will notice better performance.

One of the biggest home broadband issues is inadequate WiFi, with out-of-date routers or poor router placement degrading broadband performance. Pushing faster speeds into the house can overcome some of these WiFi issues.

This should be a wake-up call to everybody else in the industry to raise their speeds. There are ISPs and overbuilders all across the country competing against the giant cable companies, and they need to upgrade speeds immediately or lose the public relations battle in the marketplace. Even those who are not competing against these companies need to take heed, because any web search is going to show consumers that 100 Mbps or faster broadband is now the new standard.

These unilateral changes make a mockery of the FCC. It's ridiculous to be debating a broadband definition of 25 Mbps when the two biggest ISPs in the country have base products 5 to 8 times faster than that. States with broadband grant programs are having the same speed conversation, and this should alert them that the new goal for broadband needs to be at least 100 Mbps.

These speed increases were inevitable. We’ve known for decades that the home demand for broadband has been doubling every three years. When the FCC first started talking about 25 Mbps as the definition of acceptable broadband, the math said that within six years we’d be having the same discussion about 100 Mbps broadband – and here we are having that discussion.
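
That math is worth making explicit; a quick sketch starting from the 25 Mbps definition the FCC adopted in 2015:

```python
# Household demand doubling every three years, starting from the FCC's
# 25 Mbps broadband definition adopted in 2015.
definition_mbps, year = 25, 2015
for _ in range(3):
    year += 3
    definition_mbps *= 2
    print(f"{year}: ~{definition_mbps} Mbps to keep pace with demand")
# 2018: ~50 Mbps, 2021: ~100 Mbps, 2024: ~200 Mbps
```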

The FCC doesn’t want to recognize the speed realities in the world because they are required by law to try to bring rural speeds to be par with urban speeds. But this can’t be ignored because these speed increases are not just for bragging rights. We know that consumers find ways to fill faster data pipes. Just two years ago I saw articles wondering if there was going to be any market for 4K video. Today, that’s the first thing offered to me on both Amazon Prime and Netflix. They shoot all new programming in 4K and offer it at the top of their menus. It’s been reported that at the next CES electronics shows there will be several companies pushing commercially available 8K televisions. This technology is going to require a broadband connection between 60 Mbps and 100 Mbps depending upon the level of screen action. People are going to buy these sets and then demand programming to use them – and somebody will create the programming.

8K video is not the end game. Numerous companies are working on virtual presence, where we will finally be able to converse with a hologram of somebody as if they were in the same room. Early versions of this technology, which ought to be available soon, will probably use the same range of bandwidth as 8K video, but I've been reading about near-future technologies that would produce realistic holograms and might require as much as a 700 Mbps connection – perhaps the first real need for gigabit broadband.

While improving urban data speeds is great, every increase in urban broadband speeds highlights the poor condition of rural broadband. While urban homes are getting 130 – 200 Mbps for decent prices, there are still millions of homes with either no broadband or with broadband at speeds of 10 Mbps or less. The gap between urban and rural broadband is growing wider every year.

If you’ve been reading this blog you know I don’t say a lot of good things about the big cable companies. But kudos to Comcast and Charter for unilaterally increasing broadband speeds. Their actions speak louder than anything that we can expect out of the FCC.

Subsidizing Rural Broadband

In a rare joint undertaking involving the big and small telcos, the trade groups USTelecom and NTCA—The Rural Broadband Association sponsored a whitepaper titled Rural Broadband Economics: A Review of Rural Subsidies.

The paper describes why it's expensive to build broadband networks in rural areas, with high costs driven mostly by low customer density. This is almost universally understood within the industry, but the paper lays out the situation for politicians and others who might not be familiar with it.

The paper goes on to describe how other kinds of public infrastructure – such as roads, electric grids, water and natural gas systems – deal with the higher costs in rural areas. Both natural gas and water systems share the same characteristics as cable TV networks in this country, and they are rarely constructed in rural areas. Rural customers must use alternatives like wells for water or propane instead of natural gas.

The electric grid is the most analogous to the historic telephone network in the country. The government decided that everybody should be connected to the electric grid, and various kinds of government subsidies have been used to help pay for rural electric systems. Where the bigger commercial companies wouldn't build, a number of rural electric cooperatives and municipal electric companies filled the gap. The federal government developed subsidy programs, such as low-cost loans, to help construct and maintain the rural electric grids. There was no attempt to create universal electric rates across the country, and areas lucky enough to have hydroelectric power enjoy electric rates that are significantly lower than regions with more expensive methods of power generation.

Roads are the ultimate example of government subsidies for infrastructure. There are both federal and state fuel taxes used to fund roads. Since most drivers live in urban areas, their fuel taxes heavily subsidize rural roads.

The paper explains that there are only a few alternatives to fund rural infrastructure:

  • Charge higher rates to account for the higher costs of operating in rural areas. This is why small-town water rates are often higher than rates in larger towns in the same region.
  • Don't build the infrastructure because it's too expensive. This is seen everywhere: cable TV networks, natural gas distribution and water and sewer systems are rarely built outside of towns.
  • Finally, rural infrastructure can be built using subsidies of some kind.

Subsidies can come from several different sources:

  • Cross-subsidies within the same firm. For example, telephone regulators long ago accepted the idea that business rates should be set higher to subsidize residential rates.
  • Cross subsidies between firms. An example would be access rates charged to long distance carriers that were used for many years to subsidize local telephone companies. There are also a number of electric companies that have subsidized the creation of broadband networks using profits from the electric business.
  • Philanthropic donations. This happens to a small extent. For example, I recently heard that Microsoft contributed money to help build fiber to a small town.
  • Government subsidies. There have been a wide range of these in the telecom industry, with the latest big ones being the CAF II grants that contribute towards building rural broadband.

Interestingly, the paper doesn't draw many strong conclusions other than to say that rural broadband will require government subsidies of some kind. It concludes that the other kinds of subsidies are not reasonably available.

I suspect there are no policy recommendations in the paper because the small and large companies probably have a different vision of rural broadband subsidies. This paper is more generic and serves to define how subsidies function and to compare broadband subsidies to other kinds of infrastructure.

Verizon’s Case for 5G, Part 4

Ronan Dunne, EVP and President of Verizon Wireless, recently made Verizon's case for aggressively pursuing 5G. This last blog in the series looks at Verizon's claim that they are going to use 5G to offer residential broadband. The company has tested the technology over the last year and announced plans to soon introduce it in a number of cities.

I’ve been reading everything I can about Verizon and I think I finally figured out what they are up to. They have been saying that within a few years that they will make fixed 5G broadband available to millions of homes. One of the first cities they will be building is Sacramento. It’s clear that in order to offer fast speeds that each 5G transmitter will have to be fiber fed. To cover all neighborhoods in Sacramento would require building a lot of new fiber. Building new fiber is both expensive and time-consuming. And it’s still a head scratcher about how this might work in neighborhoods without poles where other utilities are underground.

Last week I read about an announcement by Lee Hicks of Verizon of a new initiative called One Fiber. Like many large telecoms, Verizon has numerous divisions that own fiber assets – the FiOS group, the wireless group and the old MCI CLEC business. The new policy will consolidate all of this fiber into a centralized system, making existing and new fiber available to every part of the business. It might be hard to believe, but within Verizon each of these groups has managed its own fiber separately. Anybody who has ever worked with the big telcos understands what a colossal undertaking it will be to consolidate this.

Sharing existing fiber and new fiber builds among its various business units is the change that will unleash the potential for 5G deployment. My guess is that Verizon has eyed AT&T's fiber strategy and is copying the best parts of it. AT&T has quietly been extending its fiber-to-the-premise (FTTP) network by building fiber for short distances around the numerous existing fiber nodes in the AT&T network. A node on an AT&T fiber built to reach a cell tower or a school is now also a candidate to function as a network node for FTTP. Using existing fiber wisely has allowed AT&T to claim they will soon reach over 12 million premises with fiber – without having to build a huge amount of new fiber.

Verizon’s One Fiber policy will enable them to emulate AT&T. Where AT&T has elected to build GPON fiber-to-the-premise, Verizon is going to try 5G wireless. They’ll deploy 5G cell sites at their existing fiber nodes where it makes financial sense. Verizon doesn’t have as extensive of a fiber network as AT&T and I’ve seen a few speculations that they might pass as many as 7 million premises with 5G within five years.

Verizon has been claiming that 5G can deliver gigabit speeds out to 3,000 feet. It might be able to do that in ideal conditions, but the technology is proprietary and nobody outside Verizon knows its real capabilities. One thing we know about all wireless technologies is that they are temperamental and vary a lot with local conditions. The whole industry is waiting to see the speeds and distances Verizon will really achieve with the first-generation gear.

The company certainly has some work in front of it to pursue this philosophy. Not all fiber is the same and their existing fiber network probably has fibers of many sizes, ages and conditions using a wide range of electronics. After inventorying and consolidating control over the fiber they will have to upgrade electronics and backbone networks to enable the kind of bandwidth needed for 5G.

The Verizon 5G network is likely to consist of a series of cell sites serving small neighborhood circles – the size of each circle depending upon topography. This means the Verizon network will likely not be ubiquitous in big cities – it will reach whatever is in range of 5G cell sites placed on existing Verizon fiber. After the initial deployment, which is likely to take a number of years, the company will have to assess whether building additional fiber makes economic sense. That determination will consider all of the Verizon departments, not just 5G.

I expect the company to follow the same philosophy it did when building FiOS. They were disciplined and only built in places that met certain cost criteria. That resulted in a network that, even today, brings fiber to one block but not the one next door. FiOS fiber was largely built where Verizon could overlash fiber onto their telephone wires or pull fiber through existing conduits – I expect their 5G expansion to be just as disciplined.

The whole industry is dying to see what Verizon can really deliver with 5G in the wild. Even at 100 Mbps they would be a competitive alternative to the cable companies. If they can really deliver gigabit speeds to entire neighborhoods then they will have shaken up the industry. But in the end, if they stick to the One Fiber model and only deploy 5G where it's affordable, they will be bringing a broadband alternative to those who happen to live near their fiber nodes – and that will mean passing millions, or even tens of millions, of homes.

Telecom Containers

There is new lingo being used by the large telecom companies that will be foreign to the rest of the industry – containers. In the simplest definition, a container is a relatively small set of software that performs one function. The big carriers are migrating to software systems that use containers for several reasons, the primary one being the migration to software-defined networks.

A good example of a container is a software application for a cellular company that can communicate with the sensors used in crop farming. The cellular carrier would install this particular container at cell sites where there is a need to communicate with field sensors, but would not install it at the many cell sites where such communication isn't needed.

The advantage to the cellular carrier is that they have simplified their software deployment. A rural cell site will have a different set of containers than a small cell site deployed near a tourist destination or a cell site deployed in a busy urban business district.

The benefits of this are easy to understand. Consider the software that operates our PCs. The manufacturers fill the machine up with every application a user might ever want, yet most of us use perhaps 10% of the applications pre-installed on our computers. The downside to having so many software components is that it takes a long time to upgrade the software – my Mac has at times taken an hour to install an operating system update.

In a software-defined network, the ideal configuration is to move as much of the software as possible to the edge devices – in this example, to the cell sites. Today every cell site must hold and process all of the software needed by any cell site anywhere. That's both costly, in terms of the computing power needed at each cell site, and inefficient, in that cell sites carry applications that will never be used. In a containerized network each cell site runs only the modules needed locally.
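
The carriers' actual tooling isn't public, but the per-site idea can be sketched in a few lines of Python; every name below is hypothetical:

```python
# Minimal sketch of per-site container manifests (all names hypothetical).
# Each cell site runs only the feature modules it actually needs.
from dataclasses import dataclass, field

@dataclass
class CellSite:
    name: str
    containers: set[str] = field(default_factory=set)

# A shared catalog of available feature containers.
CATALOG = {"core-lte", "farm-sensors", "tourist-wifi-offload", "urban-smallcell"}

rural_site = CellSite("rural-42", {"core-lte", "farm-sensors"})
urban_site = CellSite("downtown-7", {"core-lte", "urban-smallcell"})

def deploy(site: CellSite) -> None:
    """Pretend-deploy: validate the manifest against the catalog, then 'start' each container."""
    unknown = site.containers - CATALOG
    if unknown:
        raise ValueError(f"{site.name}: unknown containers {unknown}")
    for c in sorted(site.containers):
        print(f"{site.name}: starting container '{c}'")

deploy(rural_site)   # rural site never loads the urban modules
deploy(urban_site)   # urban site never loads the farm-sensor module
```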

The cellular carrier can make an update to the farm-sensor container without interfering with the other software at a cell site. That adds safety – if something goes wrong with that update, only the farm-sensor network will experience a problem instead of possibly pulling down the whole network of cell sites. One of the biggest fears of operating a software-defined network is that an upgrade gone wrong could pull down the entire network. Upgrades made to specific containers are much safer from a network engineering perspective, and if something goes wrong in an upgrade the carrier can quickly revert to the backup for that specific container to reestablish service.
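
The same sketch extends naturally to the upgrade-with-rollback idea; again, the names and version numbers are hypothetical:

```python
# Sketch of a per-container upgrade with rollback (hypothetical names/versions).
# Only the upgraded container is touched; a failed health check reverts just it.
running = {"core-lte": "2.4.1", "farm-sensors": "1.0.3"}

def upgrade(container: str, new_version: str, health_check) -> None:
    previous = running[container]
    running[container] = new_version
    if not health_check(container):
        running[container] = previous  # revert only this container
        print(f"{container}: upgrade to {new_version} failed, rolled back to {previous}")
    else:
        print(f"{container}: now running {new_version}")

# A failing health check rolls back farm-sensors; core-lte is never touched.
upgrade("farm-sensors", "1.1.0", health_check=lambda c: False)
print(running)  # {'core-lte': '2.4.1', 'farm-sensors': '1.0.3'}
```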

The migration to containers makes sense for a big telecom carrier. Each carrier can develop unique containers that define its specific product set. In the past most carriers bought off-the-shelf applications like voice mail – but with containers they can more easily customize products to operate as they wish.

Like most things that are good for the big carriers, there is a long-term danger from containers for the rest of us. Over time the big carriers will develop their own containers and processes that are unique to them. They’ll create much of this software in-house and the container software won’t be made available to others. This means that the big companies can offer products and features that won’t be readily available to smaller carriers.

In the past, the products and features available to smaller ISPs were the result of product research done by telecom vendors for the big ISPs. Vendors developed software for cellular switches, voice switches, routers, set-top boxes, ONTs and all of the hardware used in the industry. Vendors could justify spending money on software development due to expected sales to the large ISPs. However, as the big ISPs migrate to a world where they buy empty boxes and develop their own container software, there won't be a financial incentive for the hardware vendors to put effort into software applications. Companies like Cisco are already adapting to this change, and it's going to trickle through the whole industry over the next few years.

This is just one more thing that will make it a little harder in future years to compete with the big ISPs. Perhaps smaller ISPs can band together somehow and develop their own product software, but it’s another industry trend that will give the big ISPs an advantage over the rest of us.

Verizon’s Case for 5G, Part 3

Ronan Dunne, EVP and President of Verizon Wireless, recently made Verizon's case for aggressively pursuing 5G. In this blog I want to examine the two claims based upon improved latency – gaming and stock trading.

The 5G specification sets a goal of near-zero latency (on the order of a millisecond) for the connection from the wireless device to the cellular tower. We'll have to wait to see if that can be achieved, but obviously the many engineers who worked on the 5G specification think it's possible. It makes sense from a physics perspective – a radio signal travels through the air at, for all practical purposes, the speed of light (there is a minuscule amount of slowing from interaction with air molecules). That makes a signal through the air slightly faster than one through fiber, since light slows down when passing through fiberglass – a signal takes roughly 0.83 milliseconds to traverse a hundred miles of fiber optic cable, versus about 0.54 milliseconds through the air.
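
Those numbers fall straight out of the speed of light and the refractive index of glass. A minimal sketch, assuming a typical fiber group index of about 1.5 (which gives ~0.81 ms per hundred miles, close to the 0.83 ms figure – real fibers vary slightly):

```python
# Propagation delay over 100 miles, through free space vs. through fiber.
C_KM_PER_MS = 299_792.458 / 1000  # speed of light ≈ 299.8 km per millisecond
FIBER_INDEX = 1.5                 # assumed group index; light in fiber travels ~c/1.5
distance_km = 100 * 1.609         # 100 miles in kilometers

air_ms = distance_km / C_KM_PER_MS
fiber_ms = distance_km * FIBER_INDEX / C_KM_PER_MS
print(f"Through air:   {air_ms:.2f} ms")    # ~0.54 ms
print(f"Through fiber: {fiber_ms:.2f} ms")  # ~0.81 ms
```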

This means that a 5G signal will have a slight latency advantage over FTTP – but only for the first hop from the customer. A 5G wireless signal almost immediately hits a fiber network at a tower or small cell site in the neighborhood, and from that point forward it experiences the same latency as an all-fiber connection.

Most of the latency in a fiber network comes from devices that process the data – routers, switches and repeaters. Each such device in a network adds some delay to the signal – and that starts with the first device, be it a cellphone or a computer. In practical terms, when comparing 5G and FTTP the network with the fewest hops and fewest devices between a customer and the internet will have the lowest latency – a 5G network might or might not be faster than an FTTP network in the same neighborhood.

5G does have a latency advantage over non-fiber technologies, but it ought to be about the same advantage enjoyed by an FTTP network. Most FTTP networks have latency in the 10-millisecond range (one hundredth of a second). Cable HFC networks have latency in the range of 25-30 ms, DSL latency ranges from 40-70 ms, and satellite broadband connections run from 100-500 ms.
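
For the curious, a crude way to sample your own last-mile latency is to time a few TCP handshakes; a real test would use ICMP ping or purpose-built tools, and the host below is just a placeholder:

```python
# Quick-and-dirty latency check: average the TCP connect time to a host.
import socket
import time

def connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average TCP connect time in milliseconds over a few samples."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # handshake complete; close immediately
        total += time.perf_counter() - start
    return total / samples * 1000

print(f"Average connect time: {connect_ms('example.com'):.1f} ms")
```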

Verizon’s claim for improving the gaming or stock trading connection also implies that the 5G network will have superior overall performance. That brings in another factor which we generally call jitter. Jitter is the overall interference in a network that is caused by congestion. Any network can have high or low jitter depending upon the amount of traffic the operator is trying to shove through it. A network that is oversubscribed with too many end users will have higher jitter and will slow down – this is true for all technologies. I’ve had clients with first generation BPON fiber networks that had huge amounts of jitter before they upgraded to new FTTP technology, so fiber (or 5G) alone doesn’t mean superior performance.

The bottom line is that a 5G network might or might not have an overall advantage compared to a fiber network in the same neighborhood. The 5G network might have a slight advantage on the first connection from the end user, but that also assumes that cellphones are more efficient than PCs. From that point forward, the network with the fewest hops to the Internet and the least congestion will be faster – and that will vary case by case, neighborhood by neighborhood, when comparing 5G and FTTP.

Verizon is claiming that the improved latency will improve gaming and stock trading. That's certainly true where 5G competes against a cable company network. But any trader who really cares about making a trade a millisecond faster is already on a fiber connection, probably one that sits close to a major internet POP. Such traders are engaged in computerized trading where no person intervenes in the trade decision. For any stock trade that involves a human, an extra few thousandths of a second in execution is irrelevant, since the human decision process is far slower than that (for someone like me these decisions can be measured in weeks!).

Gaming is more interesting. I see Verizon's advantage for gaming in making game devices mobile. If 5G broadband is affordable (not a given), then a 5G connection allows a game box to be used anywhere there is power. I think that will be a huge hit with the mostly-younger gaming community. And, since most homes buy broadband from the cable company, the lower latency of 5G ought to appeal to a gamer now using a cable network – assuming the 5G network has adequate upload speeds and low jitter. Gamers who want a fiber-like experience will likely pony up for a 5G gaming connection if it's priced right.

Consumers Love Independent ISPs

For the second time in three years the municipally owned and operated ISP in Chattanooga got the highest ranking in the annual Consumer Reports survey about ISPs. They were the only ISP in the survey that received a positive ranking for value. This is a testament to the fact that consumers love independent ISPs compared to the big ISPs like Comcast, Charter, AT&T and Verizon.

Chattanooga’s EPB makes it into the ranking due to their size, but there are numerous other small ISPs offering an alternative to the big companies. There are about 150 other municipal ISPs around the country providing residential ISP service and many more serving their local business communities. There are numerous cooperatives that provide broadband – many of these are historically telecom cooperatives with the ranks recently growing as electric cooperatives become ISPs. There are hundreds of independent telephone companies serving smaller markets. There is also a growing industry of small commercial ISPs who are building fiber or rural wireless networks.

As somebody who works with small ISPs every day it’s not hard to understand why consumers love them.

  • Real customer service. People dread having to call the big ISPs. They know when they call that the person that answers the phone will be reading from a script and that every call turns into a sales pitch. It’s the negative customer service experience that drives consumers to rank big ISPs at the bottom among all other corporations. Small ISPs tend to have genuine, non-scripted service reps that can accurately answer questions and instantly resolve issues.
  • Transparent pricing. Most big ISPs have raised rates significantly in recent years by the introduction of hidden fees and charges. People find it annoying when they see broadband advertised in their market as costing $49.99 when they know they are paying far more than that. Smaller ISPs mostly bill what they advertise and don’t disguise their actual prices. I’m sure that’s one of the reasons that consumers in Chattanooga feel like they are getting a value for their payment.
  • No negotiations on prices. Big ISPs make customers call every few years and negotiate for lower prices. It’s obvious why they do this because there are many customers who won’t run the gauntlet and end up paying high prices just to avoid making that call. The big ISPs probably think that customers feel like they got a bargain after each negotiation – but customers almost universally hate the process. The ISP triple play and cellular service are the only two common commodities that put consumers through such a dreadful process. Most small ISPs charge their published prices and consumers love the simplicity and honesty.
  • Quality networks. The big ISPs clearly take short cuts on maintenance, and it shows. Big ISPs have more frequent network outages – my broadband connection from Charter goes out for short periods at least a dozen times a week. Small ISPs work hard to have quality networks. Small ISPs do the needed routine maintenance and spend money to create redundancy to limit and shorten outages.
  • Responsive repair times. The big ISPs, particularly in smaller markets, can take seemingly forever to fix problems. Most of us now rely on broadband in our daily routine and nobody wants to wait days to see a repair technician. Most of my small ISP clients won't end a work day until open customer problems are resolved.
  • Fast resolution of problems. Big ISPs are not good at dealing with things like billing disputes. If a customer can’t resolve something on a first call with a big ISP they have to start all over from the beginning the next time they call. Small ISPs tend to resolve issues quickly and efficiently.
  • Privacy. The big ISPs are all harvesting customer data to use for advertising and to monetize in general. ISPs, by definition, can see most of what we do online. Small ISPs don't track customers and are not collecting or selling their data.
  • Small ISPs are local. Their staff lives and works in the area and they know where a customer lives when they call. It’s common when calling a big ISP to be talking to somebody in another state or country who has no idea about the local network. Small ISPs keep profits in the community and generally are a big part of local civic life. Big ISPs might swoop in occasionally and give a big check to a local charity, but then disappear again for many years.
  • Big ISPs are really the Devil. Not really, but when I see how people rank the big ISPs – below banks, insurance companies, airlines and even the IRS – I might be onto something!