Quad Bundling

Since Comcast and Charter are now embarking on the cellular business, we are soon going to find out if there is any marketing power in a quad bundle. Verizon, and to a lesser degree AT&T, have had the ability to create bundles that include cellular service, but they never really pushed this in the marketplace in the way that Comcast is considering.

Comcast has said that the number one reason they are entering the cellular business is to make customers “stickier” and to reduce churn. That implies offering cellular service cheaper than competitors like Verizon, or at least creating bundles that give the illusion of big savings on cellular. For now, the preliminary pricing Comcast has announced doesn’t seem low enough to take the industry by storm. But I expect that, as they gain customers, the company will find more creative ways to bundle the product.

The Comcast pricing announced so far shows only a few options. Comcast is offering a $45 per month ‘unlimited’ cell plan (capped at 20 GB of data per month) that is significantly less expensive than any current unlimited plan from Verizon or AT&T. But for now this low price is only available to customers who buy one of the full, more expensive Comcast triple-play bundles. The alternative is a $65 per month unlimited plan that is $5 per month lower than the equivalent Verizon plan. Comcast also plans to offer family plans that sell data at $12 per gigabyte, shared across all of the phones on the plan – for many families this might be the best bargain.
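To see where the shared-data family plan beats the unlimited plan, here is a back-of-the-envelope comparison in Python. The prices come from the announcement described above; the household usage figures are purely illustrative assumptions.

```python
# Rough comparison of Comcast's announced cellular pricing (illustrative only).
# $12 per shared gigabyte vs. $45 per line for 'unlimited' (capped at 20 GB).

PER_GB_PRICE = 12      # shared-data price, $ per GB per month
UNLIMITED_PRICE = 45   # discounted 'unlimited' price, $ per line per month

def shared_data_cost(total_household_gb):
    """Monthly cost when all lines draw from one shared pool of data."""
    return PER_GB_PRICE * total_household_gb

def unlimited_cost(lines):
    """Monthly cost when every line takes the $45 unlimited plan."""
    return UNLIMITED_PRICE * lines

# Example: a family of four that mostly rides on WiFi and uses 10 GB of
# cellular data in total pays $120 on the shared plan vs. $180 on unlimited.
for total_gb in (5, 10, 15, 20):
    print(f"4 lines, {total_gb:>2} GB shared: "
          f"shared-data ${shared_data_cost(total_gb)} vs unlimited ${unlimited_cost(4)}")

# Break-even per line: $45 / $12 = 3.75 GB of cellular data per month.
print("Break-even usage per line:", UNLIMITED_PRICE / PER_GB_PRICE, "GB")
```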

One interesting feature of the Comcast plan is that it will automatically offload data traffic to the company’s WiFi network. Comcast has a huge WiFi network with over 16 million hotspots. This includes a few million outdoor hotspots, but also a huge network of home WiFi routers that double as public hotspots. That means that customers sitting in a restaurant or visiting a home with a Comcast WiFi connection will automatically use those connections instead of more expensive cellular data. Depending on where a person lives or works, this could significantly lower how much 4G data a consumer uses.

There are still technical issues to be worked out to allow for seamless WiFi-to-WiFi handoffs. Comcast has allowed customers to connect to its WiFi hotspots for a few years. I used to live in a neighborhood that had a lot of the Comcast home hotspots. When walking my dog it was extremely frustrating to let my cellphone use the Comcast WiFi network, because as I went in and out of hotspots my data connections would be interrupted and have to be reinitiated. I always had to turn off WiFi when walking and use only cellular data. It will be interesting to see how, and if, Comcast has overcome this issue.

A recent survey done by the investment bank Jefferies has to be of concern to the big four cellular companies. In that survey 41% of respondents said that they would be ‘very likely’ to consider a quad-play cable bundle that includes cellular. Probably even scarier for the cellular companies was the finding that 76% of respondents who were planning to shop for a new cell plan within the next year said they would be open to trying a cellular product from a cable company.

I wrote recently about how the cellular business has entered the phase where cellular products are becoming a commodity. Competition among the four cellular companies is already resulting in lower prices and more generous data plans. But when the cable companies enter the fray in all of the major metropolitan areas the competition is going to ratchet up another notch.

The cable companies will be a novelty at first and many customers might give them a try. But it won’t take long for people to think of them as just another cellular provider. One thing that other surveys have shown is that people have a higher expectation for good customer service from a cellular provider than they do for the cable companies. If Comcast is going to retain cellular customers then they are either going to have to make the bundling discounts so enticing that customers can’t afford to leave, or they are going to have to improve their customer service experience.

Even if Comcast and Charter have only modest success with cellular, say a 10% market share, they will hurt the other cellular companies. The number one driver of profits in the cellular business is economies of scale – something you can see by comparing the bottom line of Sprint or T-Mobile to that of Verizon or AT&T. If Comcast is willing to truly use cellular to help hang on to other customers, and if that means they don’t expect huge profits from the product line, then they are probably going to do very well with a quad-play product.

And of course, any landline ISP competing against Comcast or Charter has to be wary. If the cellular products work as Comcast hopes then it’s going to mean it will be that much harder to compete against these companies for broadband. Bundled prices have always made it hard for customers to peel away just one product and the cable companies will heavily penalize any customers that want to take only their data product elsewhere.

Broadband Shorts – July 2017

Today I’m going to talk about a few topics that relate to broadband, but that are too short for a separate blog.

Popularity of Telehealth. The Health Industry Distributors Association conducted a follow-up survey of people who had met with a doctor via a broadband connection instead of a live office visit. The survey found that a majority of people were very satisfied with the telehealth visit and 54% said that they thought the experience was better than a live office visit.

Interestingly, over half of the telehealth users were under 50, and they preferred telehealth because of the convenience. Many said that once they found that their doctor allowed telehealth visits they requested them whenever possible. Of course, many telehealth users live in rural areas where it can be a long drive to make a routine doctor’s office visit. The doctors involved in telehealth also like it for routine office visits. They do complain, however, that not enough insurance companies have caught up with the concept and that they often encounter reimbursement problems.

Explosion of Mobile Data Usage. Ericsson, the company that supplies a lot of electronics for the cellular industry, has warned cellular companies to prepare for explosive growth in cellular data traffic over the next five years. They warn that within five years the average cellphone user will grow from today’s average monthly usage of 5 gigabytes to a monthly usage of 26 gigabytes. They say usage will reach 6.9 gigabytes by the end of this year – 40% growth over last year.
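Here is a quick check of the growth rate implied by Ericsson’s numbers. The 5 GB, 6.9 GB and 26 GB figures come from the forecast as summarized above; the compounding math is mine.

```python
# Implied annual growth rate behind Ericsson's cellular data forecast.
start_gb, end_gb, years = 5.0, 26.0, 5

annual_growth = (end_gb / start_gb) ** (1 / years) - 1
print(f"Implied growth rate: {annual_growth:.0%} per year")   # roughly 39%

# Sanity check against the near-term figure: one year of ~39% growth on
# 5 GB lands very close to the 6.9 GB Ericsson expects by year end.
print(f"After one year: {start_gb * (1 + annual_growth):.1f} GB")
```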

They say that several factors will contribute to the strong growth. Obviously video drives a lot of the usage, but there is also huge annual growth from social media as those platforms incorporate more video. They also predict that by 2022, as we start to meld 5G cellular into the network, users will feel more comfortable using data on their cellphones.

New Satellite Broadband. ViaSat recently launched a new satellite that will allow for data speeds of up to 200 Mbps and that has a total throughput of 300 gigabits per second. The satellite is expected to be placed into service in early 2018 and will boost the company’s Exede broadband product.

The new satellite, dubbed ViaSat 2, will originally augment and eventually replace the company’s current ViaSat 1 satellite. The company currently serves 659,000 customers from the ViaSat 1 satellite plus a few older satellites it acquired when it purchased WildBlue in 2009. The new satellite will allow an expansion of the customer base.

The company expects that the majority of customers will continue to buy data products with speeds up to 25 Mbps, like those already offered by Exede. This tells me that the faster speeds, while available, are going to be expensive. The satellite will still sit in a high geostationary orbit, which means the continued high latency that makes satellite service incompatible with real-time applications. And there is no word on whether the larger capacity will allow the company to raise the stingy data caps that customers seem to universally hate.
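The latency problem is easy to quantify. The sketch below assumes the satellite sits at geostationary altitude (roughly 35,786 km), which is what drives the delay; the resulting round trip of roughly half a second is before any ground-network or processing delay.

```python
# Back-of-the-envelope latency for a geostationary satellite link.
ALTITUDE_KM = 35_786           # assumed geostationary altitude above the equator
SPEED_OF_LIGHT_KM_S = 299_792  # radio propagation speed in a vacuum

one_way_s = 2 * ALTITUDE_KM / SPEED_OF_LIGHT_KM_S   # ground -> satellite -> ground
round_trip_ms = 2 * one_way_s * 1000                # request up/down plus reply up/down

print(f"One-way delivery: {one_way_s * 1000:.0f} ms")
print(f"Minimum round trip before any processing: {round_trip_ms:.0f} ms")
# Roughly 240 ms one way and nearly 500 ms round trip - which is why
# real-time applications struggle on geostationary satellite broadband.
```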

Growth of Music Streaming. Nielsen released statistics showing that streaming audio is growing at an explosive rate and seems to have crossed the threshold to become the primary way that most people listen to music. Audio streams in 2017 are 62% higher than just a year ago. The industry has grown from 113.5 billion annual streams to 184 billion in just one year.

Nielsen estimates that total listens to music from all media, including albums and music downloads, will be 235 billion this year, meaning that streaming audio now accounts for 78% of all music listened to.

And this growth has made for some eye-popping numbers. For example, Drake’s release of More Life in March saw 385 million streams in the week after release. Those kinds of numbers swamp the number of people that would listen to a new artist under older media.

The Consequences of Killing Network Neutrality

It looks almost certain that the FCC is going to kill Title II regulation, and with it net neutrality. Just as happened the last go-around, the FCC has already received millions of comments asking it not to kill net neutrality. And if you read the press you find dire predictions of the consequences that will result from the death of net neutrality. But as somebody who has a decent understanding of the way that broadband and the associated money flows work in the industry, I don’t think it will be as dire as critics predict, and I think there will also be unanticipated consequences.

Impact on Start-ups – the Cost of Access. One of the dire predictions is that a new start-up company that uses a lot of broadband – the next Netflix, Vine or Snapchat – won’t be able to gain the needed access with carriers, or that their access will be too expensive. Let me examine that conjecture:

  • Let me follow the flow of money that a start-up needs to spend to be on the web. Its largest direct cost is the cost of uploading its content onto the web through an ISP. The pricing for bulk access has always favored the bigger players, and today it’s more expensive per gigabyte for a company that uploads a gigabyte per day than for one that uploads a terabyte.
  • The normal web service doesn’t pay anything beyond that to deliver its content to customers. Customers buy various speeds of download and use the product at will. Interestingly, it’s only the largest content providers that might run into issues without net neutrality. The big fights a few years ago on this issue were between Netflix and the largest ISPs. The Netflix volumes had grown so gigantic that the big ISPs wanted Netflix to somehow contribute to the cost of the electronics the ISPs were buying to distribute the service. The only way start-ups would face a cost to terminate content would be if the ISPs somehow created some kind of access fee to get onto their networks. But that sounds largely impractical. Bytes are bytes and they don’t exactly contain the name and billing address of the party that dumped the traffic onto the web.
  • Some content like live video is a complicated web product. You can’t just dump it on the web at one location in the country and hope it maintains quality everywhere it ends up. There are already companies that act as the intermediary for streaming video to carry out the caching and other functions needed to maintain video quality. Even the big content providers like SlingTV don’t tackle this alone.
  • Finally, new vendors will arise that will assist start-ups by aggregating their traffic with others. We already see that today with Amazon, which is bundling the content of over 90 content providers on its video platform. The content providers benefit by taking advantage of the delivery mechanisms that Amazon has in place. This is obviously working, and it’s hard to see how the end of net neutrality would stop somebody like Amazon from being a super-bundler. I think wholesalers like Amazon would fill the market gap for start-ups.

Paid Prioritization. The other big worry voiced by fans of Title II regulation is that it stops paid prioritization, or Internet fast lanes. There are both good and bad possible consequences of that.

  • It’s silly to pretend that we don’t already have significant paid prioritization – it’s called peering. The biggest content providers like Google, Netflix and Amazon have negotiated peering arrangements where they deliver traffic directly to ISPs in specific markets. The main benefit of this for the content providers is that it reduces latency and delay, but it also saves them from buying normal uploads into the open Internet. For example, instead of dumping content aimed at Comcast in Chicago onto the open web, these big companies deliver the Chicago-bound traffic directly to Comcast. These arrangements save money for both parties. And they are very much paid prioritization, since smaller content providers have to instead route through the major Internet POPs.
  • On the customer side of the network, I can envision ISPs offering paid prioritization as a product to customers. Customer A may choose to have traffic for a medical monitoring company always get a priority, customer B might choose a gaming service and customer C might choose a VoIP connection. People have never had the option of choosing what broadband connections they value the most and I could see this being popular – if it really works.
  • And that leads into the last big concern. The big fear about paid prioritization is that any service that doesn’t have priority is going to suffer in quality. But will that really happen? I have a fairly good broadband connection at 60 Mbps. That connection can already deliver a lot of different things at the same time. Let’s say that Netflix decided to pay my ISP extra to get guaranteed priority to my house. That might improve my Netflix reception, although it already seems pretty good. But on my 60 Mbps connection would any other service really suffer if Netflix has priority? From what I understand about the routing of Internet traffic, any delays caused by such prioritization would be minuscule – likely no more than a few hundred microseconds, which is imperceptible to me (see the quick calculation after this list). I can already crash my Internet connection today if I try to download more content than it can handle at the same time. But as long as a customer isn’t doing that, I have a hard time seeing how prioritization will cause much of a problem – or even why somebody like Netflix would pay an ISP extra for it. They are already making sure they have a quality connection through peering and other network arrangements, and I have a hard time understanding how anything at the customer end of the transaction would make much difference. Prioritization could matter for those on slow broadband connections – but their primary problem is lack of broadband speed, and they are already easily overwhelmed by too much simultaneous traffic.
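To put a rough number on that last point, the sketch below estimates how long a full-size packet takes to serialize on a 60 Mbps connection – roughly the extra wait if a single prioritized packet jumps ahead of one of mine. The 60 Mbps figure is my own connection as described above; the 1,500-byte packet size and the 25 Mbps figure for a 4K video stream are common rules of thumb, not numbers from any ISP.

```python
# How much could one prioritized packet delay my traffic on a 60 Mbps link?
LINK_MBPS = 60           # my home connection speed
PACKET_BYTES = 1_500     # a full-size Ethernet frame (typical MTU) - assumed

serialization_s = (PACKET_BYTES * 8) / (LINK_MBPS * 1_000_000)
print(f"One packet at {LINK_MBPS} Mbps: {serialization_s * 1e6:.0f} microseconds")

# A 4K stream at a commonly cited ~25 Mbps still leaves 35 Mbps for everything
# else, so prioritizing it mostly matters when the link is already saturated.
print(f"Headroom left with a 25 Mbps stream: {LINK_MBPS - 25} Mbps")
```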

I am not as fearful of the end of net neutrality as many because I think the Internet operates differently than most people imagine. I truly have a hard time seeing how ending net neutrality will really change the way I receive broadband at my home. However, I do have big concerns about the end of Title II regulation and fear things like data caps and my ISP using my personal information. I think the real concern for most folks is Title II regulation, but that’s too esoteric a topic, so we all seem to be using the term ‘network neutrality’ as a substitute for it.

The Need for Fiber Redundancy

I just read a short article that mentioned that 30,000 customers in Corvallis, Oregon lost broadband and cable service when a car struck a utility pole and cut a fiber. It took Comcast 23 hours to restore service. There is nothing unusual about this outage and such outages happen every day across the country. I’m not even sure why this incident made the news other than that the number of customers that lost service from a single incident was larger than normal.

But this incident points to the issue of network redundancy – the ability of a network to keep working after a fiber gets cut. Since broadband is now becoming a necessity and not just a nice-to-have thing we are going to be hearing a lot more about redundancy in the future.

Lack of redundancy can strike anywhere, in big cities or small – but the effects in rural areas can be incredibly devastating. A decade ago I worked with Cook County, Minnesota, a county in the far north of the state. The economy of the county is driven by recreation and they were interested in getting better broadband. But what drove them to get serious about finding a solution was an incident that knocked out broadband and telephone to the entire county for several days. The county has since built its own fiber network, which includes redundant routes to the rest of the world.

We used to have this same concern about the telephone networks and smaller towns often got isolated from making or receiving calls when there was a cable cut. But as cellphones have become prevalent the cries about losing landline telephone have diminished. But the cries about lack of redundancy are back after communities suffer the kinds of outages just experienced by Corvallis. Local officials and the public want to know why our networks can’t be protected against these kinds of outages.

The simple answer is money. It often means building more fiber, and at a minimum it takes a lot of additional, expensive electronics to create network redundancy. The way that redundancy works is simple – there must be separate fiber or electronic paths serving an area in order to provide two broadband feeds. This can be created in two ways. On larger networks it’s done with fiber rings. In a ring configuration two sets of electronics send every signal in both directions around the ring. In that configuration, when a fiber is cut the signal is still received from the opposite direction. The other (and even more expensive) way to create diversity is to lay two separate fiber networks to reach a given location.
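To make the ring idea concrete, here is a small sketch that models a handful of sites as a graph and counts which single fiber cuts would isolate a site. The topology is invented for illustration; the point is that every single-link failure on a ring leaves all sites connected, while any cut on a linear (spur) route causes an outage.

```python
# A toy check of single-cut survivability: ring vs. linear (spur) routes.
from collections import deque

def still_connected(nodes, links, cut):
    """Breadth-first search over the links that survive after 'cut' is removed."""
    remaining = [link for link in links if link != cut]
    neighbors = {n: set() for n in nodes}
    for a, b in remaining:
        neighbors[a].add(b)
        neighbors[b].add(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in neighbors[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == len(nodes)

sites = ["hub", "north", "east", "south"]
ring = [("hub", "north"), ("north", "east"), ("east", "south"), ("south", "hub")]
spur = [("hub", "north"), ("north", "east"), ("east", "south")]

for name, links in (("ring", ring), ("spur", spur)):
    failures = [cut for cut in links if not still_connected(sites, links, cut)]
    print(f"{name}: {len(failures)} of {len(links)} single cuts cause an outage")
# ring: 0 of 4 single cuts cause an outage
# spur: 3 of 3 single cuts cause an outage
```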

Route redundancy tends to diminish as a network gets closer to customers. In the US we have many different types of fiber networks. The long-haul fiber networks that connect the NFL cities are largely on rings. From the major cities there are then regional fiber networks built to reach surrounding communities. Some of these networks are also on fiber rings, but a surprising number are not and face the same kind of outages that Cook County had. Finally, there are the local networks of fiber, telephone copper, or coaxial cable that reach customers. It’s rare to see route diversity at the local level.

But redundancy can be added anywhere in the network, at a cost. For example, it is not unusual for large businesses to seek local route diversity. They most often achieve this by buying broadband from more than one provider. But sometimes this doesn’t work if those providers are sharing the same poles to reach the business. I’ve also seen fiber providers create a local ring for large businesses willing to pay the high price for redundancy. But most of the last mile that we all live and work on has no protection. We are always one local disaster away from losing service like happened in Corvallis.

But the Corvallis outage was not an outage where a cut wire knocked out a dozen homes on a street. The fiber that got cut was obviously one that was being used to provide coverage to a wide area. A lot of my clients would not design a network where an outage could affect so many customers. If they served a town the size of Corvallis they would build some local rings to significantly reduce the number of customers that could be knocked out by an outage.

The big ISPs like Comcast have taken shortcuts over the years and have not spent the money to build local rings. I am not singling out Comcast here, because I think this is largely true of all of the big ISPs.

The consequences of a fiber cut like the one in Corvallis are huge. That outage had to include numerous businesses that lost their broadband connection for a day – and many businesses today cannot function without broadband. Businesses that are run out of homes lost service. And the cut disrupted homework, training, shopping, medical monitoring, security alarms, banking – you name it – for 30,000 homes and businesses.

There is no easy fix for this, but as broadband continues to become essential in our lives these kinds of outages are going to become less acceptable. We are going to start to hear people, businesses, and local governments shouting for better network redundancy, just as Cook County did a decade ago. And that clamor is going to drive some of these communities to seek their own fiber solution to protect from the economic devastation that can come with even moderate network outages. And to some degree, if this happens the carriers will have brought this upon themselves due to pinching pennies and not making redundancy a higher priority in network design.

Shaking Up the FTTP Industry

Every once in a while I see something in the equipment market that surprises me. One of my clients recently got pricing for building a gigabit PON FTTP network from the Chinese company ZTE. The pricing is far under the market price for other brands of equipment, and it makes me wonder whether this will put downward price pressure on the rest of the industry.

There are two primary sets of electronics in a PON network – the OLT and ONTs. The OLT (Optical Line Terminal) is a centrally located piece of equipment that originates the laser signal headed towards customers. The OLT is basically a big bay of lasers that talk to customers. The ONT (Optical Network Terminal) is the device that sits at a customer location that has the matching laser that talks back to the OLT.

ZTE’s pricing is industry-shaking. They have priced OLTs at almost a third of the price of their competition. They have been able to do this partially by improving the OLT cards that hold the lasers – each of their cards can connect to twice as many customers as other OLTs. This makes the OLT smaller and more energy efficient. But that alone cannot account for the discount, and their pricing is obviously aimed at gaining a foothold in the US market.

The ONT pricing is even more striking. They offer a gigabit Ethernet-only indoor ONT for $45. That price is so low that it almost turns the ONT into a throwaway item. This is a very plain ONT. It has one Ethernet port and does not have any way to connect to existing inside wiring for telephone or cable TV. It’s clearly meant to work with WiFi at the customer end to deliver all services. The pricing is made even more affordable by the fact that ZTE offers lower-than-normal industry prices for the software needed to activate and maintain the ONTs in future years.

This pricing is going to lead companies to reexamine their planned network design. A lot of service providers still use traditional ONTs that contain multiple Ethernet ports and that also have ports for connection to both telephone copper and cable company coaxial wiring. But those ONTs are still relatively expensive and the most recent quotes I’ve seen put these between $200 and $220.

Using an Ethernet-only ONT means dumping the bandwidth into a WiFi router and using that for all services. That means having to use voice adapters to provide telephone service, similar to what’s been used by VoIP providers for years. But these days I have clients that are launching fiber networks without a voice product, and even if they want to support VoIP the adapters are relatively inexpensive. This network design also means delivering only IPTV if there is a cable product and this ONT could not be used with older analog-based cable headends.
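Here is a rough per-home comparison of the two designs, using the ONT prices mentioned above. The prices assumed for the WiFi router and the voice adapter are my own ballpark figures and will vary by vendor.

```python
# Rough per-home electronics cost: traditional full-featured ONT vs. a cheap
# Ethernet-only ONT paired with a WiFi router and a VoIP adapter.
traditional_ont = 210      # midpoint of the $200 - $220 quotes mentioned above
ethernet_only_ont = 45     # ZTE's gigabit Ethernet-only indoor ONT

# Ballpark assumptions for the extra gear the Ethernet-only design needs.
wifi_router = 60           # assumed managed WiFi router price
voip_adapter = 30          # assumed analog telephone adapter (only if voice is sold)

simple_design = ethernet_only_ont + wifi_router + voip_adapter
print(f"Traditional ONT:            ${traditional_ont} per home")
print(f"Ethernet-only ONT + extras: ${simple_design} per home")
print(f"Savings per home:           ${traditional_ont - simple_design}")
# Even after buying the router and the voice adapter, the simpler design comes
# in well under the traditional ONT - and many ISPs would be supplying a
# managed WiFi router under either design anyway.
```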

ZTE is an interesting company. They are huge in China – a $17 billion company. They make a lot of cellphones, which is their primary product line. But they also make many other kinds of telecom gear, like this PON equipment. They claim their FTTP equipment is widely used in China and that they have more FTTP customers connected than most US-based vendors.

This blog is not a blanket endorsement of the company. They have a questionable past. They have been accused of bribery in making sales in Norway and the Philippines. They also were fined by the US Commerce Department for selling technology to North Korea and Iran, both under sanctions. And to the best of my knowledge they are just now trying to crack into the US market, which always is something to consider.

But this kind of drop in FTTP pricing has been needed. It is surprising that OLTs and ONTs from other manufacturers still cost basically the same as they did years ago. We generally expect that as electronics are mass produced the prices will drop, but we have never seen this in PON networks. One can hope that this kind of pricing will prod other manufacturers to sharpen their pencils. Larger fiber ISPs already get pricing cheaper than what I mentioned above on today’s equipment. But most of my clients are relatively small and have little negotiating power with equipment vendors. I hope this shakes the industry a bit – something that’s needed if we want to deploy fiber everywhere.

Our Aging Fiber Infrastructure

One thing that I rarely hear talked about is how many of our long-haul fiber networks are aging. The fiber routes that connect our largest cities were mostly built in the 1990s in a very different bandwidth environment. I have a number of clients that rely on long-haul fiber routes and the stories they tell me scare me about our future ability to move bandwidth where it’s needed.

In order to understand the problems of the long-haul networks it’s important to look back at how these fiber routes were built. Many were built by the big telcos. I can remember the ads from AT&T thirty years ago bragging about how they had built the first coast-to-coast fiber network. A lot of other fiber networks were built by competitive fiber providers like MCI and Qwest, which saw an opportunity to compete against the pricing of the big telco monopolies.

A lot of the original fibers built on intercity routes were small by today’s standards. The original networks were built to carry voice and much smaller volumes of data than today and many of the fibers contain only 48 pairs of fiber.

To a large degree the big intercity fiber routes follow the same physical paths, sometimes along interstate highways, but to an even greater extent along the railroad tracks that run between markets. Most companies that move big amounts of data want route diversity to protect against fiber cuts or disasters, yet a significant percentage of the routes between many cities run right next to the fibers of rival carriers.

It’s also important to understand how the money works in these routes. The owners of the large fibers have found it to be lucrative to lease pairs of fiber to other carriers on long-term leases called IRUs (indefeasible rights to use). It’s not unusual to be able to shop for a broadband connection between primary and secondary markets, say Philadelphia and Harrisburg, and find a half-dozen different carriers. But deeper examination often shows they all share leased pairs in the same fiber sheath.

Our long-haul fiber infrastructure is physically aging and I’ve seen a lot of evidence of network failures. There are a number of reasons for these failures. First, the quality of fiber glass today has improved by several orders of magnitude over glass that was made in the 1980s and 1990s. Some fiber routes are starting to show signs of cloudiness from age, which kills a given fiber pair. Probably even more significant is the fact that fiber installation techniques have improved over the years. We’ve learned that if a fiber cable is stretched or stressed during installation, microscopic cracks can form that slowly spread over time until a fiber becomes unusable. And finally, we are seeing the expected wear and tear on networks. Poles get knocked down by weather or accidents. Contractors occasionally cut buried fibers. Every time a long-haul fiber is cut and spliced back together it loses a little efficiency, and over time the splices can add up to become a problem.
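One way to see how splices add up is a simple loss budget. The sketch below uses commonly cited ballpark figures – roughly 0.25 dB per kilometer of attenuation at 1550 nm and on the order of 0.1 dB per splice – which are my assumptions, not measurements from any particular route.

```python
# Simple optical loss budget for an aging long-haul span (illustrative figures).
SPAN_KM = 80                 # assumed distance between repeater/amplifier huts
FIBER_LOSS_DB_PER_KM = 0.25  # typical attenuation at 1550 nm
SPLICE_LOSS_DB = 0.1         # assumed loss per fusion splice

def span_loss(splices):
    """Total loss across the span for a given number of splices."""
    return SPAN_KM * FIBER_LOSS_DB_PER_KM + splices * SPLICE_LOSS_DB

for splices in (20, 40, 80):
    print(f"{splices:>3} splices: {span_loss(splices):.1f} dB of loss on an {SPAN_KM} km span")

# Every repair after a cut typically adds a splice or two, so a span that
# started near 20 dB of fiber loss can creep toward the receiver's limit over
# the years, even though the glass itself never "broke".
```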

Probably the parts of the network that are in the worst shape are the electronics. It’s an expensive proposition to upgrade the bandwidth on a long-haul fiber network because that means not only changing lasers at the end points of a fiber, but at all of the repeater huts along a fiber route. Unless a fiber route is completely utilized the companies operating these routes don’t want to spend the capital dollars needed to improve bandwidth. And so they keep operating old electronics that are often many years past their expected functional lives.

Construction of new long-haul fiber networks is incredibly expensive and it’s rare to hear of any major initiative to build fiber on the big established intercity routes. Interestingly, the fiber to smaller markets is in much better shape than the fiber between NFL cities. These secondary fiber routes were often built by groups like consortiums of independent telephone companies. There were also some significant new fiber routes built using the 2009 stimulus funding.

Today a big percentage of the old intercity fiber network is owned by AT&T, Verizon and CenturyLink. They built a lot of the original network but over the years have also gobbled up many of the other companies that built fiber – and are still doing so, like with Verizon’s purchase last year of XO and CenturyLink’s purchase of Level3. I know a lot of my clients worry every time one of these mergers happens because it removes another of a small handful of actual fiber owners from the market. They are fearful that we are going to go back to the old days of monopoly pricing and poor response to service issues – the two issues that prompted most of the construction of competitive fiber routes in the first place.

A lot of the infrastructure of all types in this country is aging. Sadly, I think we need to put a lot of our long-haul fiber backbone network into the aging category.

The Cost of Building 5G

It seems like I can barely browse industry articles these days without seeing another prediction of the cost of providing fast broadband everywhere in the US. The latest study, just released on July 12 from Deloitte, estimates that it will require at least $130 billion over the next seven years in fiber investment to make the country fully ready for 5G.

Before digesting that number it’s important to understand what they are talking about. Their study looks at deploying a ‘deep fiber’ network that would bring fiber close to homes and businesses and then use wireless technology to complete the connection. This is not a new concept – for decades we have referred to it as fiber-to-the-curb. This network design never went very far in the past because there wasn’t a good wireless technology available to make that final connection. It differs from an all-fiber network by replacing the fiber drop wire to the home with wireless electronics, and such a network only makes sense if that substitution yields a significant savings over building fiber all the way to the home.

We are now on the verge of having the needed wireless technology. There are now some first-generation wireless connections being tested that could finally make this a viable network deployment. And like with everything new, within a decade the wireless electronics needed will improve in function and cost a lot less.

To put the Deloitte estimate into perspective, Verizon claimed to have spent $13 billion on their original FiOS fiber network. Because they were able to overlash fiber onto their own telephone wires, the FiOS network was built at a relatively low cost of $750 per customer passed. But the Verizon FiOS network never blanketed any city; instead they selectively cherry-picked neighborhoods where the construction costs were the lowest. Verizon had originally told Wall Street they were going to spend $24 billion on fiber, but they abandoned a lot of the planned construction when the costs came in higher than expected.

But back to the Deloitte number of $130 billion. That is the cost of just the fiber needed to get deep into every neighborhood in the country. It doesn’t include the electronics needed to broadcast the wireless signal or the electronics needed inside homes and businesses to receive it. Nobody yet has any estimate of what those will cost, but they won’t be cheap, at least not for a few years. The cost of getting onto utility poles and street-light poles, or of constructing urban towers, is not going to be cheap either. And the cost of the electronics won’t be cheap until the gear has gone through a few generations of refinement. Using Deloitte’s same methodology and assuming a very conservatively low cost of $500 for electronics per customer, this would add another $30 billion if only half the customers in the country use the new 5G network.
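The arithmetic behind that $30 billion add-on, and the reach implied by Verizon’s FiOS spending, is easy to check. The household count is my own round-number assumption of roughly 120 million US households.

```python
# Checking the back-of-the-envelope numbers in the text.
US_HOUSEHOLDS = 120_000_000        # rough assumption for US households
ELECTRONICS_PER_CUSTOMER = 500     # conservatively low 5G electronics cost
TAKE_RATE = 0.5                    # assume only half of households subscribe

electronics_total = US_HOUSEHOLDS * TAKE_RATE * ELECTRONICS_PER_CUSTOMER
print(f"Customer electronics: ${electronics_total / 1e9:.0f} billion")   # ~ $30 billion

# The Verizon FiOS figures above also imply how many homes $13B reached:
fios_spend, cost_per_passing = 13_000_000_000, 750
print(f"FiOS passings implied: {fios_spend / cost_per_passing / 1e6:.0f} million homes")
```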

The big question that must be asked when tossing out a number like $130 billion is if there is anybody who is interested in deploying wireless loops in this manner? Such a network would be used to directly compete against the big cable companies. What Deloitte is talking about is not faster cellular service, but fast connections into homes and businesses. Are there any companies willing to spend that much money to go head-to-head with cable networks that will soon be able to deliver gigabit speeds?

The obvious candidates are Verizon and AT&T. Verizon has been talking a lot lately about this potential business plan, and so perhaps they might pursue it. AT&T, while bragging about the amount of money they are spending on fiber, has not shown a huge inclination to dive back into the residential broadband market. And there are not a lot of companies with capital budgets big enough to consider this.

Consider the capital budgets of the five largest telcos. AT&T is on track to spend $22B in 2017, but a lot of that is being spent in Mexico. Verizon’s 2017 capex budget is around $17B. CenturyLink spends something a little less than $3B. Frontier spends around $1B and Windstream spends about $0.8B.

It’s clear that unless AT&T and Verizon are willing to redirect the majority of their capital spending to this new technology, it’s not going to go anywhere. I think it’s clear that both AT&T and Verizon are going to be looking hard at the technology and doing trials. But even if those trials are successful I can’t see them pouring in the needed billions to build ‘deep fiber’ everywhere. It’s far more likely that the technology will be deployed the same way Verizon deployed FiOS – built only where the cost is lowest while ignoring everybody else.

Both of these companies understand that it’s not going to be easy to wrestle customers back from the big cable companies. Just building these fiber networks is a daunting financial investment – one that Wall Street would likely punish them for undertaking. But even building the needed networks is not going to be any assurance of market success unless they can convince customers they are a better bargain. I just don’t see these companies going hog wild in making the needed investments to deploy this widely, but instead see this as the newest technology for cherry-picking the best opportunities.

The Future of AT&T and Verizon

The cellphone companies have done such a great job of getting everybody to purchase a smartphone that cellular service in the country is quickly turning into a commodity. And, as is typical with most commodity products, that means less brand loyalty from customers and lower market prices for the products.

We’ve recently seen the cellular market demonstrate the turn toward becoming a commodity. In the first quarter of this year the cellular companies had their worst performance since the industry began. Both AT&T and Verizon lost post-paid customers during the quarter. T-Mobile added fewer customers than expected and Sprint continued to lose money.

This is a huge turnaround for an industry where the big two cellular companies were each making over $1 billion per month in profits. The change in the industry comes from two things. First, people are now shopping for lower prices and are ready to change carriers to get lower monthly bills. The trend for lower prices was started by T-Mobile to gain market share, but low prices are also being pushed by cellular resellers – being fed by the big carriers. The cellular industry is only going to get more competitive when the cable companies soon enter the market. That will provide enough big players to make cellular minutes a true commodity. The cable companies have said they will be offering low prices as part of packages aimed at making customers stickier and will put real price pressure on the other cellular providers.

But the downturn in the first quarter was almost entirely due to the rush by all of the carriers to sell ‘unlimited’ data plans – which, as I’ve noted in some earlier blogs, are really not unlimited. But these plans offer lower prices for data and are freeing consumers to be able to use their smartphones without the fear of big overage fees. Again, this move was started by T-Mobile, but it was also driven heavily by public demand. AT&T and Verizon recognized that if they didn’t offer this product set that they were going to start bleeding customers to T-Mobile.

It will be really interesting to watch what happens to AT&T and Verizon, who are now predominantly cellular companies that also happen to own landline networks. The vast majority of revenue for these companies comes from the cellular parts of their business. When I looked at both of their annual reports last year I had a hard time finding evidence that these companies were even in the landline network business. Discussions of those business lines are buried deep within the annual reports.

These companies obviously need to find new forms of revenues to stay strong. AT&T is tackling this for now by going in a big way after the Mexican market. But one only has to look down the road a few years to see that Mexico and any other cellular market will also trend towards commoditization.

Both companies have their eyes on the same potential growth plays:

  • Both are making the moves necessary to tackle the advertising business. They look at the huge revenues being made by Facebook and Google and realize that as ISPs they are sitting on customer data that could make them major players in the targeted marketing space. Ad revenues are the predominant revenue source at Google and if these companies can grab even a small slice of that business they will make a lot of money.
  • Both are also chasing content. AT&T’s bid for the purchase of Time Warner is still waiting for government approval. Verizon has made big moves with the purchases of AOL and Yahoo and is rumored to be looking at other opportunities.
  • Both companies have been telling stockholders that there are huge amounts of money to be made from the IoT. These companies want their cellular networks to be the default networks for collecting data from IoT devices. They certainly ought to win the business for things like smart cars, but there will be a real battle between cellular and WiFi/landline connections for most other IoT usage.
  • Both companies are making a lot of noise about 5G. They are mostly concentrating on high-speed wireless connections using millimeter wave spectrum that they hope will make them competitive with the cable companies in urban areas. But even that runs a risk because if we see true competition in urban areas then prices for urban broadband might also tumble. And that might start the process of making broadband into a commodity. On the cellular side it’s hard to think that 5G cellular won’t quickly become a commodity as well. Whoever introduces faster cellphone data speeds might get a bump upward for a few years, but the rest of the industry will certainly catch up to any technological innovations.

It’s hard to foresee any business line where AT&T and Verizon are going to get the same monopoly power that they held in the cellular space for the past few decades. Everything they might undertake is also going to be available to competitors, meaning they are unlikely to make the same kind of huge margins they have historically made with cellular. No doubt they are both going to be huge companies for many decades to come since they own the cellular networks and spectrum. But I don’t think we can expect them to be the cash cows they have been in the past.

The Return of Edge Computing

We just went through a decade during which the majority of industry experts told us that most of our computing needs were going to move to the cloud. But it seems that trend is starting to reverse somewhat, and there are many applications where we are seeing the return of edge computing. This trend will have big implications for broadband networks.

Traditionally everything we did involved edge computing – or the use of local computers and servers. But a number of big companies like Amazon, Microsoft and IBM convinced corporate America that there were huge benefits of cloud computing. And cloud computing spread to small businesses and homes and almost every one of us works in the cloud to some extent. These benefits are real and include such things as:

  • Reduced labor costs from not having to maintain an in-house IT staff.
  • Disaster recovery of data due to storing data at multiple sites.
  • Reduced capital expenditures on computer hardware and software.
  • Increased collaboration due to having a widely dispersed employee base on the same platform.
  • The ability to work from anywhere there is a broadband connection.

But we’ve also seen some downsides to cloud computing:

  • No computer system is immune from outages and an outage in a cloud network can take an entire company out of service, not just a local branch.
  • A security breach into a cloud network exposes the whole company’s data.
  • Cloud networks are subject to denial of service attacks.
  • Loss of local control over software and systems – a conversion to cloud often means losing valuable legacy systems, and functionality from these systems is often lost.
  • Not always as cheap as hoped for.

The recent move away from cloud computing comes from applications that need huge amounts of computing power in real time. The most obvious example of this is the smart car. Some of the smart cars under development run as many as 20 onboard servers, making each car a rolling data center. There is no hope of ever moving the brains of smart cars or drones to the cloud because of the huge amounts of data that must be passed quickly between the sensors and the computers. Any external connection is bound to have too much latency to support true real-time decisions.
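A quick calculation shows why round-trip latency rules out cloud control of a moving vehicle. The highway speed is ordinary; the round-trip times for the three scenarios are illustrative assumptions, not measurements.

```python
# Distance a car travels while waiting on a control-loop round trip.
MPH_TO_M_PER_MS = 1609.344 / 3_600_000   # miles per hour -> meters per millisecond

scenarios = {
    "local (edge) processing, ~2 ms": 2,
    "nearby cloud region, ~30 ms": 30,
    "distant cloud plus congestion, ~100 ms": 100,
}

speed_mph = 65
for label, rtt_ms in scenarios.items():
    meters = speed_mph * MPH_TO_M_PER_MS * rtt_ms
    print(f"{label}: car travels {meters:.1f} m before a response arrives")
# At 65 mph the car covers roughly 3 cm per millisecond, so a 100 ms round
# trip means nearly 3 meters of travel - far too much for collision decisions.
```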

But smart cars are not the only edge devices that don’t make sense on a cloud network. Some other such applications include:

  • Drones have the same concerns as cars. It’s hard to imagine a broadband network that can be designed to always stay in contact with a flying drone or even a sidewalk delivery drone.
  • Industrial robots. Many new industrial robots need to make decisions in real-time during the manufacturing process. Robots are no longer just being used to assemble things, but are also being used to handle complex tasks like synthesizing chemicals, which requires real-time feedback.
  • Virtual reality. Today’s virtual reality devices need extremely low latencies in order to deliver a coherent image and it’s expected that future generations of VR will use significantly more bandwidth and be even more reliant on real-time communications.
  • Medical devices like MRIs also require low latencies in order to pass huge data files rapidly. As we build artificial intelligence into hospital monitors the speed requirement for real-time decision making will become even more critical.
  • Electric grids. It turns out that it doesn’t take much of a delay to knock down an electric grid, and so local feedback is needed to make split-second decisions when problems pop up on grids.

We are all familiar with a good analogy for the impact of performing electronic tasks from a distance. Anybody my age remembers when you could pick up a telephone, get instant dialtone, and then get a quick ring response from the phone at the other end. But as we’ve moved telephone switches farther from customers it’s no longer unusual to wait seconds for a dialtone, and to wait even more agonizing seconds to hear the ringing start at the other end. Such delays are annoying for a telephone call but deadly for many computing applications.

Finally, one of the drivers of the move to more edge computing is the desire to cut down on the amount of bandwidth that must be transmitted. Consider a factory where thousands of devices are monitoring specific operations during the manufacturing process. The idea of sending these mountains of data to a distant location for processing seems almost absurd when local servers can handle the data faster and with lower latency. But cloud computing is certainly not going to go away and is still the best fit for many applications. In this factory example it would still make sense to send alarms and other non-standard data to a remote monitoring location even if the processing needed to keep a machine running is done locally.

White Space Spectrum for Rural Broadband – Part II

Word travels fast in this industry, and in the last few days I’ve already heard from a few local initiatives that have been working to get rural broadband. They’re telling me that the naysayers in their communities are now pushing them to stop working on a broadband solution since Microsoft is going to bring broadband to rural America using white space spectrum. Microsoft is not going to be doing that, but some of the headlines could make you think they are.

Yesterday I talked about some of the issues that must be overcome in order to make white space spectrum viable. It certainly is no slam dunk that the spectrum is going to be viable for unlicensed use under the FCC spectrum plan. And as we’ve seen in the past, it doesn’t take a lot of uncertainty for a spectrum launch to fall flat on its face, something I’ve seen a few times just in recent decades.

With that in mind, let me discuss what Microsoft actually said in both their blog and whitepaper:

  • Microsoft will partner with telecom companies to bring broadband by 2022 to 2 million of the 23.4 million rural people that don’t have broadband today. I have to assume that these ‘partners’ are picking up a significant portion of the cost.
  • Microsoft hopes their effort will act as a catalyst for this to happen in the rest of the country. Microsoft is not themselves planning to fund or build to the remaining rural locations. They say that it’s going to take some combination of public grants and private money to make the numbers work. I just published a blog last Friday talking about the uncertainty of having a federal broadband grant program. Such funding may or may not ever materialize. I have to wonder where the commercial partners are going to be found who are willing to invest the $8 billion to $12 billion that Microsoft estimates this will cost.
  • Microsoft only thinks this is viable if the FCC follows their recommendation to allocate three channels of unlicensed white space spectrum in every rural market. The FCC has been favoring creating just one channel of unlicensed spectrum per market. The cellular companies that just bought this spectrum are screaming loudly to keep this at one channel per market. The skeptic in me says that Microsoft’s white paper and announcement is a clever way for Microsoft to put pressure on the FCC to free up more spectrum. I wonder if Microsoft will do anything if the FCC sticks with one channel per market.
  • Microsoft admits that for this idea to work that manufacturers must mass produce the needed components. This is the classic chicken-and-egg dilemma that has killed other deployments of new spectrum. Manufacturers won’t commit to mass producing the needed gear until they know there is a market, and carriers are going to be leery about using the technology until there are standardized mass market products available. This alone could kill this idea just as the FCC’s plans for the LMDS and MMDS spectrum died in the late 1990s.

I think it’s also important to discuss a few important points that this whitepaper doesn’t talk about:

  • Microsoft never mentions the broadband data speeds that can be delivered with this technology. The whitepaper does talk about being able to deliver broadband to about 10 miles from a given tower. One channel of white space spectrum can deliver about 30 Mbps up to 19 miles in a point-to-point radio shot. From what I know of the existing trials, these radios can deliver speeds of around 40 Mbps at six miles in a point-to-multipoint network, with less speed as the distance increases. Microsoft wants multiple channels in a market because bonding multiple channels could greatly increase speeds, perhaps to 100 Mbps. Even with one channel this is great broadband for a rural home that’s never had broadband. But the laws of physics mean these radios will never get faster, and those will still be the speeds offered a decade or two from now, when they are going to feel like slow DSL does today. Too many broadband technology plans fail to recognize that our demand for broadband has been doubling every three years since 1980 (see the projection after this list). What are pretty good speeds today can become inadequate in a surprisingly short period of time.
  • Microsoft wants to be the company to operate the wireless databases behind this and other spectrum. That gives them a profit motive to spur the wireless spectrums to be used. There is nothing wrong with wanting to make money, but this is not a 100% altruistic offer on their part.
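To illustrate the point about demand growth, here is the doubling-every-three-years trend projected against a fixed-speed link. The 40 Mbps starting point is the point-to-multipoint trial speed mentioned above; the doubling assumption is the historical trend cited in the bullet.

```python
# A fixed 40 Mbps link measured against demand that doubles every three years.
STARTING_SPEED_MBPS = 40        # point-to-multipoint white space speed at ~6 miles
DOUBLING_PERIOD_YEARS = 3       # historical broadband demand trend cited above

for years in (0, 3, 6, 9, 12, 15):
    demand_multiple = 2 ** (years / DOUBLING_PERIOD_YEARS)
    effective = STARTING_SPEED_MBPS / demand_multiple
    print(f"Year {years:>2}: demand is {demand_multiple:>4.1f}x today's; "
          f"40 Mbps then 'feels like' {effective:.1f} Mbps does now")
# After 15 years demand is ~32x today's, so a fixed 40 Mbps connection would
# feel roughly the way a 1 Mbps connection feels today.
```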

It’s hard to know what to conclude about this. Certainly Microsoft is not bringing broadband to all of rural America. But it sounds like they are willing to work towards making this work. But we can’t ignore the huge hurdles that must be overcome to realize the vision painted by Microsoft in the white paper.

  • First, the technology has to work, and the interference issues I discussed in yesterday’s blog need to be solved before anybody will trust using this spectrum on an unlicensed basis. Nobody will use this spectrum if unlicensed users constantly get bumped off by licensed ones. The trials done for this spectrum to date were not done in a busy spectrum environment.
  • Second, somebody has to be willing to fund the $8B to $12B Microsoft estimates this will cost. There may or may not be any federal grants ever available for this technology, and there may never be commercial investors willing to spend that much on a new technology in rural America. The fact that Microsoft thinks this needs grant funding tells me that a business plan based upon this technology might not stand on its own.
  • Third, the chicken-and-egg issue of getting over the hurdle to have mass-produced gear for the spectrum must be overcome.
  • Finally, the FCC needs to adopt Microsoft’s view that there should be 3 unlicensed channels available everywhere – something that the license holders are strongly resisting. And from what I see from the current FCC, there is a good chance that they are going to side with the big cellular companies.