The Next Big Thing

I’ve always been somewhat amused to read about the colossally important technology trends that are right around the corner. These trends are mostly driven by the wishful thinking of vendors, and they rarely come true as predicted. Even when the next big thing does come to pass, it’s almost never at the predicted magnitude. At least one of these big trends has been announced every year, and here are a few of the more interesting ones.

I can remember when it was announced that we would be living in an Internet of Things world. Not only would our houses be stuffed full of labor-saving IoT devices, but our fields, forests, and even the air around us would be full of small sensors that would give us feedback on the world around us. The reality was not the revolution predicted by the industry press, but over the past decade most of us have come to have smart devices in our homes. But the fields, forests, and surrounding environment – not so much.

The IoT trend was followed by big pronouncements that we’d all be adopting wearables. This meant not only devices like Google Glass, but also wearables built into our everyday clothes so that we could effortlessly carry a computer and sensors with us everywhere. This prediction was about as big a flop as imaginable. Google Glass crashed and burned when the public made it clear that nobody wanted everyday events to be live streamed. And other than gimmicks at CES, there was no real attempt at smart clothes.

But wearables weren’t the biggest flop of all – that title is reserved in my mind for 5G. The hype for 5G swamped the hype for all of the other big trends combined. 5G was going to transform the world. We’d have near-gigabit speeds everywhere, and wireless was going to negate the need for investing in fiber broadband networks. 5G was going to enable fleets of driverless cars. 5G would drive latency so low that it was going to be the preferred method of connection for gamers and stock traders. There were going to be 5G small cell sites on every corner, and fast wireless broadband would be everywhere. Instead of 5G, we got a watered-down version of 4G LTE labeled as 5G. Admittedly, cellular broadband speeds are way faster, but none of the predicted revolution came to pass.

A few predictions came to pass largely as touted – although at a much slower pace. Five years ago, we were told that everything was going to migrate to the cloud. Big corporations were going to quickly ditch internal computing, and within a short time, the cloud would transform computing. It didn’t happen as quickly as predicted, but we have moved a huge amount of our computing lives into the cloud. Tasks like gaming, banking, and most of the apps we’ve come to rely on are in the cloud today. The average person doesn’t realize the extent to which they rely on the cloud until they lose broadband and discover how little of what they do is stored on the computers in their homes and offices.

This blog was prompted by the latest big trend. The press is full of stories about how computing is moving back to the edge. In case the irony of that escapes you, this largely means undoing a lot of the big benefits of going to the cloud. There are some good reasons for this shift. For example, the daily news about hacking has corporations wondering if data will be safer locally than in the cloud. But the most important reason cited for the movement to edge computing is that the world is looking for extremely low latency – and this can only come when computer processing is done locally. The trouble with this prediction is that it’s hard to find applications that absolutely must have a latency of less than 10 milliseconds. I’m sure there are some, but not enough to make this into the next big trend. I could be wrong, but history would predict that this will happen to a much smaller degree than is being touted by vendors.

All big technology trends have one big weakness in common – the world naturally resists change. Even when the next big thing has clear advantages, there must be an overwhelming reason for companies and people to drop everything and immediately adopt something new that is usually untested in the market. Most businesses have learned that being an early adopter is risky – a new technology can bring a market edge, but it can also result in having egg on one’s face.

Should DSL Cost Less Than Fiber?

As I was going through my pile of unread articles, I found an article from the Associated Press that asked how big ISPs can get away with charging the same prices in urban areas for both slow and fast broadband. The article was about Shirley Neville, in New Orleans, who found that she was paying the same price for 1 Mbps DSL from AT&T as other city residents are paying for a fiber connection.

It’s a great question, and I was surprised that I hadn’t thought to write about it before. I investigate broadband prices around the country, and it’s not unusual to find the price for fiber broadband in a city set close to the price charged for DSL.

It would be easy to justify charging the same price for both technologies if AT&T was in the process of converting everybody in New Orleans to fiber. In fact, if that was the reason, I’d be impressed that AT&T wasn’t charging more for the technology upgrade. But this is not the situation. It’s clear that the AT&T fiber business plan is to build fiber to small pockets of cities, but not everywhere. The chances are high that Shirley Neville’s neighborhood and many others will not be getting fiber soon from AT&T, if ever. For every neighborhood that gets fiber, there will be many that will never see AT&T fiber.

Another possibility is that AT&T’s low price for a fiber connection is an introductory price to lure people to switch from Cox, the cable company. Perhaps when the introductory price expires, the fiber price will be higher than the DSL price. But this still doesn’t feel like a great answer to Shirley’s question, since it means AT&T is willing to give fiber customers a big break that it won’t give to DSL customers.

The most likely answer to the question is the ugliest. AT&T doesn’t feel like it needs to reduce the price of DSL in the city because DSL customers are a captive audience. Cox has some of the highest broadband prices in the country, and that gives cover for AT&T to charge whatever it wants for DSL as long as the price is lower than Cox.

Another reason that AT&T can charge the same for DSL and fiber is that there isn’t anybody to tell the company that it shouldn’t do so. The FCC eliminated broadband regulation and the Louisiana Public Service Commission doesn’t assert any authority over broadband prices. Folks like Shirley Neville don’t have anybody looking out for them, and the big ISPs can overcharge customers with impunity.

As the article points out, Shirley’s question is germane today because of the FCC’s investigation of digital discrimination. The article cites an investigation by The Markup, which analyzed over 800,000 broadband offerings from AT&T, Verizon, Earthlink, and CenturyLink in 38 cities across America and found that the four ISPs regularly offer broadband speeds at 200 Mbps or faster at the same price as broadband with speeds under 25 Mbps.

The Markup analysis shows that the neighborhoods with the worst speed options have lower median household incomes in 90% of the cities studied. Where The Markup could gather the data, it also looks like the big ISPs offered the worst deals to the least-white neighborhoods.

USTelecom responded to the issue by stating that the high cost of maintaining old copper networks justifies high prices for DSL. The article cites Marie Johnson of USTelecom writing that “Fiber can be hundreds of times faster than legacy broadband—but that doesn’t mean that legacy networks cost hundreds of times less. Operating and maintaining legacy technologies can be more expensive, especially as legacy network components are discontinued by equipment manufacturers.”

That’s exactly the response I would expect to defend monopoly pricing. Nobody expects the price of DSL to be hundreds of times less than fiber – but DSL should cost less. The big telcos have argued for decades that it costs too much to maintain copper networks. But they never finish that statement by telling us how much money they have collected over the years from a customer like Shirley Neville – possibly hundreds of times more than the cost of her share of the network.

Amazon’s Huge IoT Network

In a recent blog post, Amazon invited developers to test drive its gigantic IoT network. The network, labeled Sidewalk, was created by tying together all of Amazon’s wireless devices like Amazon Echos and Ring cameras.

Amazon claims this huge wireless network now covers 90% of U.S. households. Amazon created the network by transmitting Bluetooth and 900 MHz LoRa signals from its various devices. This network provides a benefit to Amazon because it can detect and track its own devices separate from anything a homeowner might do with WiFi.

But Amazon has intended for years to monetize this network, and this announcement begins that process. Sidewalk has flown under the radar until now, and most homeowners have no idea that their Amazon devices can connect and communicate with other devices outside the home. Amazon swears that the IoT connection between devices is fully separate from anything happening inside the house over WiFi.

Anyplace where there are more than a few Amazon devices, the network should be robust. The 900 MHz spectrum adds a lot of distance to the signals, and it’s a frequency that does a good job of penetrating obstacles like homes and trees.

Amazon believes that this network can be used by IoT device makers to improve the performance of IoT devices in a neighborhood – things like smart thermostats, appliance sensors, and smart door locks. Such devices use only a small amount of bandwidth but normally rely on the home broadband network being operational. Amazon’s vision with this network is that your smart door lock will still work even when your home WiFi isn’t working.

By making the network available to others, Amazon can unleash developers to create new types of wireless devices. For example, it’s always been a challenge to use outdoor sensors since WiFi signals outside of homes are weak and inconsistent. It’s not hard to imagine a whole new array of sensors enabled by the Sidewalk network. Picture a motion detector on a shed door or a leak detector on outdoor faucets. With this network, vendors can manufacture such devices knowing that most homes will be able to make the needed wireless connection.

The network also holds a lot of promise for municipal and business sensors as a low-cost way to communicate with smart city and other devices. It would enable, for the first time, the deployment of environmental sensors anywhere within range of the Sidewalk network.

This is another interesting venture by Amazon. At least in the U.S., this is a lower-cost solution than trying to connect to IoT devices by satellite. The only cost of building the network for Amazon was adding the wireless capability to its devices – mere pennies per unit when spread across millions of devices. But interestingly, Amazon will also have a satellite network starting in 2025 that can fill in the gaps where the Sidewalk network can’t reach.

Amazon says that it has already made deals to test the network with companies like Netvox, OnAsset, and Primax. Now that manufacturers know this network exists and is available, this ought to open up a wide range of new IoT devices that are not reliant only on WiFi. This might finally be the network that enables the original promise of IoT of a world with sensors everywhere, keeping tabs on the environment around us.

Some Musings on Telecom Valuations

One of the most interesting things I’ve witnessed in the industry over my career is how the valuations of telecom companies have risen and fallen over time. Telecom companies are generally valued and sold based on a multiple of earnings. Companies with a higher margin per customer are worth more than companies with lower margins. This method of valuation applies to telephone companies, cable companies, and fiber overbuilders.

For more than a decade, the valuation of small telephone companies has hovered around a base of five times EBITDA (earnings before interest, taxes, depreciation, and amortization). While the price somebody is willing to pay for a company is more complex than simple math, this basic metric has provided a good starting point for guessing the relative value of a telco.
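As a rough illustration of how that rule of thumb works, here is a short sketch. The revenue and expense figures are invented for the example and don’t describe any real company:

```python
# Rough sketch of a multiple-of-EBITDA valuation.
# All dollar figures are hypothetical, for illustration only.

def ebitda(revenue, operating_expenses):
    """EBITDA: earnings before interest, taxes, depreciation, and
    amortization -- here simply revenue minus cash operating costs."""
    return revenue - operating_expenses

def valuation(ebitda_value, multiple=5.0):
    """Base valuation as a multiple of EBITDA."""
    return ebitda_value * multiple

# A hypothetical small telco: $4M revenue, $2.8M cash operating costs.
e = ebitda(4_000_000, 2_800_000)     # $1.2M EBITDA
print(valuation(e))                  # 5x base multiple -> 6000000.0
print(valuation(e, multiple=10.0))   # the richer 10x multiple -> 12000000.0
```

The same arithmetic with a ten-to-twelve-times multiple shows why the valuations of twenty years ago were so much richer than today’s base multiple.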

A given company might sell something other than this average valuation. For example, there might be a motivated buyer willing to pay more, such as a neighboring company that understands the boost to combined margins through economy of scale. Properties sometimes sell for less than the expected valuation if the owners have decided it’s time to exit the business and don’t want to wait for a higher offer.

If you look back twenty years, small telcos and cable companies sold for ten to twelve times EBITDA. Twenty years ago was the beginning of the transition of small telcos and cable companies into ISPs. Buyers recognized that broadband sales would increase over time and factored that potential into valuations – they were willing to pay more to gain the upside from future broadband sales.

After the peak valuations of twenty years ago, valuations dropped over time. Rural telephone companies started to lose the historic subsidies that had bolstered earnings. Small cable companies started to see a serious erosion of cable TV margins as the price of programming skyrocketed. Buyers were less willing to buy into a company with lowered future expectations, and values dropped accordingly. I recall talking to telcos that got offers to sell at multiples of only three or four times earnings.

Over the last few years, valuations have climbed again – at least for some companies. Telcos that invested in fiber and cable companies that upgraded to gigabit capabilities have become worth more to buyers. Companies that didn’t make these upgrades are worth a lot less.

One of the interesting changes in the industry is that external venture capital has become interested in buying telecom properties. When the industry valuations hit the lowest point, most sales of telecom companies were made to other telecom companies. It seems like external interest in the industry has ratcheted up valuations. I always have to wonder if outsiders understand the industry well enough to be willing to pay more for businesses than folks who have been in the industry forever.

A few factors led to the increased valuations of recent years. One is the historically low interest rates that made it easier and more affordable for buyers to finance the purchase of companies. I also think valuations went up as some ISPs demonstrated the ability to gain near-monopolies in markets – I guess this emboldens buyers to believe they can duplicate that success with the companies they purchase.

I’m suddenly talking to companies that are being offered multiples of as much as ten times earnings. That puts these companies back to the heady valuations of 2000. It’s going to be interesting to see how many small telcos and cable companies sell when valuations are high – it has to be tempting.

I’m frankly perplexed by valuations in the ten-times range. If a buyer pays ten times earnings and doesn’t improve the business, it will take ten years just to recoup the investment – without considering the cost of the debt used to finance the purchase. A buyer has to make huge improvements to an acquisition to get the investment back in a reasonable time. The upside can come from increased revenues, reduced expenses, or a combination of the two. It’s not easy to squeeze that much improvement out of a telecom business without alienating customers.
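That payback math is easy to sketch. In the back-of-the-envelope calculation below, the 6% interest rate and the assumption that the purchase is fully debt-financed are invented for illustration, not figures from any actual deal:

```python
# Back-of-the-envelope payback on a multiple-of-EBITDA acquisition.
# The interest rate and financing share are hypothetical assumptions.

def years_to_payback(annual_earnings, multiple, interest_rate,
                     debt_share=1.0, max_years=100):
    """Count the years of flat earnings needed to recover the purchase
    price, after paying interest on the outstanding debt each year."""
    price = annual_earnings * multiple
    debt = price * debt_share
    recovered = 0.0
    for year in range(1, max_years + 1):
        interest = debt * interest_rate
        net = annual_earnings - interest      # cash left after debt service
        if net <= 0:                          # earnings can't even cover interest
            return None
        recovered += net
        debt = max(0.0, debt - net)           # apply the remainder to principal
        if recovered >= price:
            return year
    return None

print(years_to_payback(1.0, 10, 0.00))   # no debt cost: 10 years
print(years_to_payback(1.0, 10, 0.06))   # fully financed at 6%: 16 years
```

With these assumptions, a buyer who changes nothing about the business needs roughly sixteen years of earnings to recover a ten-times purchase price once interest is counted, which is why buyers at that multiple have to find big improvements.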

The Trade War for Undersea Fiber

A recent article by Joe Brock for Reuters describes a new geopolitical battle over undersea fibers. There are about 400 undersea fiber routes that cross the oceans and connect the world. This is a huge business, and about 95% of all international broadband traffic passes through the undersea fibers.

There has always been some concern about undersea fibers. Countries fear that sabotage of the fibers connected to their shores could result in being isolated from the Internet. For example, there were several undersea fiber cuts in recent years that isolated Taiwan. These cuts were blamed on fishing boats and not on China, but the cuts highlight a vulnerability in the networks that drive international commerce.

I’ve also read a few other articles that claim that undersea fibers are vulnerable to eavesdropping and spying and that countries with sophisticated technology could be listening in on the traffic that crosses the seas.

The article focuses on a recent trade battle between China and the West over a new fiber route, planned for construction in 2025, that would run from France to Singapore and connect twenty countries along the way. The cable route is known as the South East Asia–Middle East–Western Europe 6, or SeaMeWe-6 for short.

The article describes the complicated consortiums that fund undersea fiber routes. This particular route included more than a dozen investors, mostly large companies that have to transport huge amounts of international data traffic. The partners on this project included companies like Microsoft, France’s Orange, and India’s Bharti Airtel, along with China Telecom, China Mobile, and China Unicom.

It initially looked like the technology award for the electronics and construction was going to go to HMN Technologies Co Ltd. for around $500 million. This is a Chinese company that was originally created by Huawei but was later spun off as a standalone company. The primary competitor bidding for the route was SubCom LLC, an American company.

Things quickly got complicated since the US and China are now embroiled in a trade war that covers a huge range of industries, including undersea fibers. After the deal was awarded to the Chinese firm, the US began warning the investors about the espionage risk of dealing with Chinese electronics vendors. The US went so far as to threaten a boycott against HMN Technologies. The various investors were split on the choice of technology vendor, but eventually agreed to spend $100 million more to use the American company.

It’s almost impossible to overstate how much of the world economy is reliant on communications through bandwidth. I find it dismaying to see basic infrastructure becoming enmeshed in international politics. The wrangling over this one fiber route is not going to be the end game but is more like the beginning of a trade war that will add cost to international communications.

This is a new escalation in the trade war that has seen the US government ban Huawei and other Chinese telecom electronics from the country. I haven’t the slightest idea about the real risk of international spying through these fibers, and I suspect there are not a lot of folks who truly understand it. I might be cynical, but it stands to reason that if the Chinese are spying on this kind of traffic, the West is likely spying as well. Microsoft and Orange argued that the threat to data security was not big enough to justify spending more to switch to the American vendor. But in the end, the pressure from the American government won, and the more expensive vendor was chosen.

Filling a Regulatory Void

Earlier this year, the Ninth Circuit Court of Appeals upheld the net neutrality regulations enacted by California. The appeal was filed on behalf of big ISPs by ACA Connects, CTIA, NCTA, and USTelecom.

The case stems from the California net neutrality legislation passed in 2018. The California law was a direct reaction to the Ajit Pai FCC that not only killed federal net neutrality rules but also wiped out most federal regulation of broadband. The California legislation made it clear that the State doesn’t want ISPs to have an unfettered ability for bad behavior.

The California net neutrality rules are straightforward. The law applies to both landline and mobile broadband. Specifically, the California net neutrality law:

  • Prohibits ISPs from blocking lawful content.
  • Prohibits ISPs from impairing or degrading lawful Internet traffic except as is necessary for reasonable network management.
  • Prohibits ISPs from requiring compensation, monetary or otherwise, from edge providers (companies like Netflix or Google) for delivering Internet traffic or content.
  • Prohibits paid prioritization.
  • Prohibits zero-rating.
  • Prohibits interference with an end user’s ability to select content, applications, services, or devices.
  • Requires the full and accurate public disclosure of network management practices, performance, and clearly worded terms of service.
  • Prohibits ISPs from offering any product that evades any of the above prohibitions.

This is an interesting step in the battle to regulate ISPs. The big ISPs put a huge amount of money and effort into getting the FCC under Ajit Pai to kill federal broadband regulation. There is a long-standing tradition in the telecom world that the FCC has the power to make federal rules, but that states are free to regulate issues not mandated by the FCC. There have been some tussles over the years between states and the FCC, but courts have consistently sided with the FCC’s authority to make national rules. When the FCC walked away from most broadband regulation, it created a regulatory void that tradition implies states are allowed to fill.

Losing this court case creates a huge dilemma for big ISPs. California is such a large part of the economy that it would be hard for ISPs to follow this law in California and not follow it elsewhere. It also seems likely that other states will now pass similar laws over the next few years, and that will create the worst possible nightmare for big ISPs – different regulations in different states.

I’ve always adhered to the belief that there is a regulatory pendulum. When regulations get too tough for a regulated industry, there is usually a big push to lighten the regulatory burden. But when the pendulum swings the other way and regulation gets too slack, there is inevitably a big push to put more restrictions on the industry being regulated. In this case, the ISPs and Ajit Pai went too far by eliminating most meaningful federal broadband regulation. There is nothing surprising about California and other states reacting to the lack of federal regulation.

With this court decision, there is nothing to stop a dozen states from creating net neutrality rules or tackling the other regulations that got voided by the Ajit Pai FCC. It’s also not hard to predict that the big ISPs will now push to create a watered-down federal version of net neutrality as a way to override a plethora of state rules.

I said earlier that this is a dilemma for large ISPs because it is rare for a small ISP to violate net neutrality principles, and not easy for one to do so. The California rules will require ISPs to write plainer terms of service, but otherwise, small ISPs in California are not likely to be bothered by any of these rules.

For the big ISPs, this is a harsh reminder that the regulatory pendulum always swings back. It’s not hard to envision celebration behind the scenes at the big ISPs when they convinced the FCC to give them everything on their wish list. But when regulations get out of balance, there is inevitably pushback in the other direction.

There is still one piece of unfinished business in this case: the courts are still examining whether the California law impinges on interstate commerce. But the Ninth Circuit’s ruling made it clear that California is free to enforce its version of net neutrality within the state.

Businesses Rely on Broadband

I don’t think most folks understand the extent to which businesses are adapting to broadband. My firm interviews businesses all over the country, and there is a drastic difference between the ways that businesses with and without good broadband operate today.

One of the best examples I can give is to talk about a specific business. It’s a casual bar/restaurant that attracts customers by offering good food and arcade games. The business is not part of a big chain and was created and is operated by the owner. A customer might spend an evening at the business and not have any clue about the extent to which it uses broadband. But consider the following ways this one local business uses broadband:

  • Customers make reservations using a service that is hosted in the cloud. The business does not keep a local reservation book and is completely reliant on the reservation service to know who will be showing up for the evening. The reservation service provides updates so that the owner is aware of heavily booked days and can make sure there are enough employees on hand.
  • Most of the food and drinks to supply the kitchen and the bar are ordered using online vendor portals. The owner rarely has to talk to vendor salespeople and rarely has to go shopping for supplies.
  • The software running the games is located in the cloud. If the broadband connection dies, the games instantly go dead. The owner says one of the coolest features of the cloud software is that customers can see how they scored on a given game in past visits – and people will try to beat their own best scores.
  • The merchant services software that accepts and processes credit cards is hosted in the cloud. The business uses touchscreen terminals for customers to pay their bills and enter tips.
  • Payroll is totally in the cloud. Employees log in when they come and go for the day, and payroll is calculated automatically. The merchant services software also processes tips directly to each waitperson.
  • Accounting for sales is in the cloud. All food, bar, and game sales are automatically added to the accounting books.
  • The background music in the restaurant comes from a cloud service.
  • The business has a voice over IP telephone that only works when the broadband is functioning.
  • There are security cameras inside and outside the business to keep a record of who comes and goes. The cameras are tied into a burglar alarm service hosted in the cloud.
  • The restaurant is active on social media and posts comments and pictures throughout the day.
  • The owner keeps a backup copy of all accounting and other key records in the cloud.
  • One of the biggest uses of bandwidth comes from providing free WiFi for patrons. At busy times that can add up to a lot of bandwidth.

The owner of the business fully understands the degree to which it is reliant on broadband. To protect against outages, the owner has always bought broadband connections from two different ISPs. Unfortunately, when there was storm damage, it turned out that both ISPs were on the same physical route, and the business had to shut down for a day. The owner switched to an ISP that uses a different physical path to reach the business.

I’m not highlighting this business because it is extraordinary – just the opposite. This is a business that is using the tools that are available to any business with broadband. There are now millions of businesses that are fully reliant on broadband to function, and that’s something we don’t talk about enough.

One interesting thing I’ve found in talking to businesses that don’t have good broadband is that they usually have only a short list of functions that could be done better if they could buy faster broadband. I’m not surprised about that because such businesses can’t imagine the changes to their daily work life that would come from fully integrating broadband into their business.

Jargon

There is a good chance that if you are reading this blog, you are well versed in a fair amount of telecom industry jargon. I do my best in writing this blog to stay away from jargon as much as possible, but it’s not easy. Jargon is shorthand, and it lets folks already in the industry talk about topics without having to explain basic concepts every time they arise.

Every segment of the industry has its own jargon. Wireless folks know what’s meant when a colleague talks about MIMO, QAM, and RAN. Fiber folks understand what is meant by OLT, jitter, and backscattering. Cable company folk can talk about DAA, CMTS, and DOCSIS. The folks that finance broadband networks talk about yield, basis points, and acid test. Regulators all know what is meant by NARUC, NOI, and CPNI.

But I challenge any industry folks reading this blog – go look out your front door and ask yourself how many of your neighbors know what DOCSIS or XGS-PON means. How many know what you mean if you refer to NRTC or WISPA – or even that those are shorthand for organizations?

It’s hard to avoid using jargon. It’s nearly impossible to talk to a network engineer about the performance of a network without quickly slipping into jargon. It’s challenging to read an FCC order if you don’t know the regulatory jargon. You had better understand the banker jargon before agreeing to a new loan for a network.

But jargon can quickly get in the way when we want to communicate with somebody who doesn’t know our shorthand. As an example, I recently sat through a presentation by a water engineer to a City Council. The engineer used jargon throughout, and I could tell that the elected officials weren’t following the nuances of what he was talking about. I would hope that somebody explained the presentation to the elected officials afterward – but since the engineer couldn’t describe his concept in plain English, most of the points he was making went over everybody’s heads.

Most folks assume everybody in the industry understands their jargon, but I know this isn’t so. Just listen to the way that a field technician and a customer service representative answer the same question from a customer – they are likely to use very different words.

I try my best to keep jargon down in this blog, but sometimes it’s almost impossible to do. It’s hard to write a 700-word piece and make a point if you have to explain each technical concept. I have to laugh when I get comments on a blog from a technician who is sure that I don’t know what I’m talking about when I try to summarize technical terms into plain language and use analogies to explain a concept. I can just hear them sputtering that I’m not being precise enough.

But this blog is a reminder to industry folks that we need to take a step back from jargon if we want folks to understand us. I can promise you that in any meeting of telecom folks there will be attendees who don’t know what some of the jargon means but are too embarrassed to say so. Jargon can be a total roadblock when trying to explain broadband to non-industry folks.

I had a college English teacher who told me something that has always stuck with me. She said that a good writer should be able to make any concept understandable to their grandmother. This doesn’t mean you have to write or speak without jargon if your target audience is folks who understand it. But it means that communication can easily fail if you can’t explain things in a way that a listener will understand.

Replacing Poles

When folks ask me for an estimate of the cost of building aerial fiber, I always say that the cost depends on the amount of make-ready work needed. Make-ready is well-named – it’s any work that must be done on poles to make them ready for stringing the new fiber.

One of the most expensive aspects of make-ready comes from having to replace existing poles. Poles need to be replaced before adding a new fiber line for several reasons:

  • The original pole is too short, and there is no space to add another wire without upgrading to a taller pole. National electric standards require specific distances between different wires for technician safety when working on a pole.
  • It’s possible that the new wire will add enough wind load during storms that the existing pole is not strong enough to carry an additional wire.
  • One of the most common reasons for replacing poles is that the poles are worn out and won’t last much longer. That’s what the rest of this blog discusses.

Poles don’t last forever. The average wooden utility pole has an expected life of 45 to 50 years. This can differ by the locality, with poles lasting longer in the desert where there are no storms and having a shorter life in more challenging environments. It’s easy to think of poles as being strong and hard to damage, but the forces of nature can create a lot of stress on a pole. The biggest stress on most poles comes from the cumulative effect of heavy winds or ice pulling on the wires and attachments.

There are a lot of reasons why poles fail:

  • Most poles are made of rot-resistant wood, but the protection eventually wears off, and poles can decay. This can be made worse if vegetation has been allowed to grow onto a pole.
  • Using a pole differently than the way it was designed is common. A pole might have been rated to carry utility wires but over time got loaded with extra attachments like electric transformers, streetlights, or cellular electronics.
  • The soil around the base of a pole can change over the decades. The area may now be subject to flooding and erosion that wasn’t anticipated when the pole was built.
  • Somebody might have removed a guy wire that was supporting the pole and not replaced it.
  • A pole may have been hit by a car, but not badly enough to be replaced.

ISPs complain when saddled with the full cost of pole replacement. Many of the issues described above should more rightfully be borne by the pole owner. But the federal and most state make-ready rules put the entire cost burden of a pole replacement on the new attacher. It is clearly not fair to make a new attacher pay the full cost to replace a pole that was already in less than ideal condition.

It may seem to the general public that poles are just stuck into the ground. But if you’ve ever watched a new pole being placed, you’ll know that the process can be complex. The design of any new pole must account for all of the anticipated stresses the pole will have to endure. This includes the weight of the wires in a windstorm, ice accumulation, soil composition, the quality of neighboring poles, the spacing between poles (the greater the spacing, the more weight and wind resistance), and if the pole is standalone or to be guyed (anchored to the ground with several strong supporting cables).

Most engineers estimate that a generic aerial construction project will require replacing around 10% of the poles. It’s a pleasant surprise when the percentage is smaller, but it can be real sticker shock if a lot of poles must be replaced. I’ve seen projects where an electric company had neglected maintenance and most of the poles were inadequate.
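To illustrate why the replacement percentage drives the cost so heavily, here is a minimal back-of-the-envelope sketch. Every number in it – poles per mile, the cost to replace a pole, the cost of simple make-ready work – is a hypothetical placeholder for illustration, not actual industry pricing.

```python
# Hypothetical make-ready estimate for one mile of aerial fiber.
# All dollar figures and pole counts are illustrative assumptions,
# not real industry quotes.

def make_ready_cost_per_mile(poles_per_mile: int,
                             replacement_rate: float,
                             cost_to_replace_pole: float,
                             simple_make_ready_cost: float) -> float:
    """Poles that must be replaced incur the full replacement cost;
    the rest incur only simple make-ready work (moving wires, etc.)."""
    replaced = poles_per_mile * replacement_rate
    untouched = poles_per_mile - replaced
    return replaced * cost_to_replace_pole + untouched * simple_make_ready_cost

# Assume 30 poles per mile, $10,000 to replace a pole,
# and $500 of simple make-ready on each pole that stays.
typical = make_ready_cost_per_mile(30, 0.10, 10_000, 500)  # ~10% replaced
bad = make_ready_cost_per_mile(30, 0.50, 10_000, 500)      # neglected plant

print(f"Typical project: ${typical:,.0f} per mile")
print(f"Neglected plant: ${bad:,.0f} per mile")
```

Under these made-up assumptions, moving from 10% replacement to 50% more than triples the make-ready cost per mile – which is exactly the sticker shock that a fiber builder can run into.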

The right question to ask is not how much it costs to build a mile of fiber. The better question to ask is how good are the poles?

Converged Networks

I’ve been reading and thinking about converged networks – networks built to serve multiple market segments. The best example of this is the largest cable companies, which are using their residential last-mile broadband networks to support the cellular business.

The cellular business is a perfect fit for a cable company. They already have fiber deep into every neighborhood, which makes it easy to strategically locate small cell sites without building additional fiber. The big cable companies have also put a lot of effort into WiFi, which saves money by offloading a lot of cellular traffic from customer phones.

Having the ability to leverage the existing network also gives cable companies a lot of flexibility. They can continue to buy wholesale cellular minutes in areas where the cell traffic volume is light and use their own cellular network where customer usage is high. This is a cost advantage over the cellular companies that must provide their networks everywhere.

It’s an interesting dynamic. I think the cable companies got into the cellular business as a way to increase customer stickiness – meaning making it harder for customers to leave. The cable companies will only sell cellular to customers who buy their broadband, meaning that a customer who wants a new ISP must also change to a new cellular provider. But now that the cable companies are amassing cellular customers, I have to think they are looking at cellular as a big profit opportunity.

To a lesser degree, large cellular companies are building a converged network when they are using excess capacity on the cellular network to provide FWA home broadband. This has obviously been a winning strategy in the last year when Verizon and T-Mobile were the only two ISPs with big growth.

But as I look at the long-term outlook for FWA, this doesn’t seem like as strong a converged strategy as what the cable companies are doing. To me, the difference is in the capability of the two networks. A cable company’s last-mile network can absorb cellular backhaul from customers with barely a blip in network performance. But the same can’t be said for cell sites. It’s far easier for cell sites to reach capacity, and cellular companies have made it clear that they will prioritize cellular data over FWA broadband performance. Maybe cellular carriers can solve this problem by eventually fully implementing the 5G specifications. But for now, cable company networks can handle convergence much more easily than cellular networks.

I have been wondering why fiber providers have not made the same push for convergence. The one exception might be Verizon, which has said in recent years that it now considers all arms of its business when building fiber assets. In the past, the company treated its fiber Fios business, the cellular business, and the CLEC business as arms-length businesses. From what I can tell, Verizon is still not as converged as the cable companies are heading toward – but there might be a lot more of that going on behind the scenes than we know about.

I’m surprised that nobody has tried to integrate the cellular business for small fiber providers. There is a pretty decent list of fiber providers today that have between 100,000 and 1 million customers – and most of them are growing rapidly. It would be a major challenge for a single ISP with a few hundred thousand customers to launch the same kind of MVNO cellular operation that has been done by Comcast and Charter. But it seems like there ought to be a business plan for fiber ISPs to collectively tackle the cellular business. A last-mile fiber company can bring all of the same benefits to an integrated cellular business as the cable companies and is only lacking their economies of scale.

I can think of a few reasons nobody has made this work. Taking time to consider cellular is a major distraction for a fiber ISP that is building fiber passings as quickly as possible. There is also the challenge of getting the many mid-sized fiber providers to trust each other enough to be partners. But at some point in the future, it’s hard to think that somebody won’t figure this out.

If fiber ISPs enter the cellular business, broadband becomes a truly converged market where cable companies, cellular companies, and independent fiber providers compete with the same suite of products. I know that’s what the public wants because it breaks some of the monopolies and increases choice. My crystal ball says we will get there – I’m just fuzzy about how long it will take.