Anything but Maps

I guess this is the time of the year when governments start thinking about what should be put into next year’s budget. Over the last few weeks I’ve been party to at least three conversations about how developing better broadband maps ought to be a budget priority. These conversations make me cringe, because I think maps ought to be the last priority – I have yet to see a broadband map produce anything useful.

There are two kinds of maps. One is a map of broadband speeds. I’ve written about this a number of times. As long as the data in these maps is provided by ISPs, the data will be highly suspect and nearly useless. No ISP is going to admit to having poor broadband on a map if their public relations posture is that they offer great broadband speeds. For technologies like DSL, the available speed can literally vary by customer, with two neighbors getting different speeds due to local issues with the copper. Even the idea of letting households report their speeds won’t work, since users might have slow speeds due to non-network issues such as poor inside wiring or an obsolete WiFi router. I know one of the stated purposes of these maps is to spur politicians to fund broadband solutions, but since these maps overstate broadband coverage they probably do more harm than good.

The other kind of map, and the one I heard discussed recently, is one that shows the location of all of the fiber in a state or a county. On the surface this sounds like a good idea – who wouldn’t want to know where somebody has already built fiber? But in practical terms this usually turns out to be more of an effort to identify where you can’t connect to fiber, because a huge portion of the existing fiber in any community is off limits to new fiber ventures. I hear this lament often. Consider some of the following:

  • There are a number of entities that are not going to tell you the specific location of their fiber unless there is somehow an inescapable law forcing them to do so. Electric companies rarely share fibers and don’t want to show specific fiber assets because of concern for the safety of the electric grid. Cable companies almost never let a competitor share their existing fiber. Telcos are generally willing to sell expensive special access circuits anywhere they have fiber, but because of security concerns don’t like sharing their detailed maps.
  • Fibers can be off limits for other reasons. One of the most aggravating situations is fiber funded by a state or other government entity that cannot be shared with others. I know of several states that have extensive gigabit networks to anchor institutions, but which prohibit ISPs and even local governments from sharing the fiber. This is sometimes due to a state law which prohibits the government from using their assets to benefit non-state ventures, but often these prohibitions are due to lobbying during the funding process by the big ISPs who don’t want competition.
  • Fiber varies in condition. Many fibers built decades ago are in bad shape if they weren’t installed and maintained properly. Neglected conduits can fill up with dirt over time and become unusable. Fibers can be dead because a technician snipped a fiber somewhere in the network and didn’t resplice.
  • Fiber without access points can be worthless. It doesn’t do any good to get access to a fiber if the only place you can reach it is miles from where you want to use it. Fiber owners are leery about creating new access points on existing fiber routes. The construction process of getting such access can accidentally cut the fiber. They also know that adding new splices to a fiber adds degradation and reduces the eventual life of the fiber. This means that in many cases, even when fiber can be shared, it can only be shared on terms dictated by the fiber owner.
  • Fiber owners rarely let outsiders have physical access to the fiber, meaning that anybody making a new connection must pay the owner for the labor involved. I’ve seen fiber owners refuse access simply because they don’t have the spare labor force in an area to support anything but their own needs.
  • Long-haul fibers often are just passing through. I worked with a city that was right next to a major fiber route along an interstate connecting two major cities. They were amazed to find out that no carrier on that fiber was willing to serve them. The carriers only wanted to sell fiber routes between the two big markets and were not willing to break into a lucrative fiber route to serve one smaller customer.

Spending the money to create a map of existing fiber is mostly a fool’s errand. Many of the fiber owners won’t cooperate. Even when they do, they are unlikely to provide details about where they might or might not allow access to the fiber – something they often only determine when they get a specific request.

Unfortunately, mapping sounds like a logical thing to do, and it’s something that politicians can latch onto to show they care about broadband shortages. I’ve repeatedly heard the argument that we can’t start to solve the broadband issue until we know what we already have in place. The reality is that it’s nearly impossible to create a meaningful map, and even if you do, it’s not going to show the fiber that is actually available for use. My advice is to use scarce broadband funds for anything but mapping. There are plenty of unscrupulous consultants who will take money to create maps that they know will be worthless.

Broadband Advocates

I’m writing this blog while sitting in a meeting of NCHeartsGigabit, a non-profit started in North Carolina to promote the expansion of broadband. The group began five or six years ago as an informal group of folks who were interested in expanding broadband coverage around the state. A few years ago they realized that they needed to move from talking to action and created a non-profit organization that now brings together the various broadband stakeholders to look for broadband solutions.

Today’s meeting is a great example of the progress they’ve made. There is a wide range of attendees representing state and local government, telco cooperatives and ISPs, bankers, foundations, equipment vendors, consultants and engineers. Most impressive is that they attracted both current Governor Roy Cooper and former Governor James B. Hunt to speak to the group. I think their presence highlights the importance that broadband coverage is now getting in this and other states. North Carolina is like the majority of states where there are some pockets of fiber-to-the-home, cities served by the big cable company networks, a smattering of rural areas served well by small telcos and cooperatives, and much of the rural parts of the state with poor or nonexistent broadband.

Sitting in this meeting reminds me how important it is to have what I call broadband advocates – folks like NCHeartsGigabit who have taken it as a mission to promote broadband. I’ve written many blogs about why broadband is vital for rural America and these are folks who get it.

I work around the country in communities of all sizes and I regularly interface with broadband advocates. Sometimes these groups are formal, like a broadband committee empowered by the local government. I recently worked with such a group in Davis, California, and it is one of the most knowledgeable and engaged advocacy groups I have ever worked with. I can tell that this group, which is also backed by widespread citizen support, is going to hold the city’s feet to the fire on broadband issues.

Sometimes there is no formal group, but instead the public acts en masse to make their voices heard on the issue. As an example, I was at a public meeting in Pope County, Minnesota last year to present the findings from a broadband feasibility study. This is the most sparsely populated county in the state and there was little broadband outside of the county seat. The public meeting was standing-room only, and the county officials heard story after story about how the lack of broadband was affecting people’s lives. The county officials heard this message and have since provided funding in a public-private partnership with a telco cooperative to bring broadband to the county.

The more common situation is that only a few broadband advocates in a community push for broadband. If these few broadband champions are persistent enough they can sometimes pull the rest of the community along. The best example of this I can think of is my friend Mark Ericsson, who was the one-man force behind bringing broadband to Renville and Sibley Counties in Minnesota. He went to hundreds of local meetings and eventually got a lot of other volunteer help, but without his early persistence this project would have died in the early days.

His success is memorable because it is rare. Bringing fiber to a rural area requires a huge amount of effort. It means convincing politicians to support the idea. It means raising the money needed for the feasibility analysis. It means raising even more money for customer education and marketing, and in many places a referendum. It takes yet more effort to raise the construction funding. And unless a community wants to be an ISP, it means finding an ISP partner to operate the business. More often than not, a community with only a few advocates can’t make it through this daunting gauntlet of tasks.

This is why I always recommend that communities with poor broadband make a push early to involve as much of the community as possible in finding a solution. I don’t understand the sociology of why it works, but I know from practical experience that unleashing a group of broadband advocates often creates momentum that is hard to stop. Households in rural counties generally want broadband badly enough that many of them will agree to play some role in getting a broadband network. If a community really wants broadband, my first advice is to create the advocacy group first and then get out of their way.

Using Gigabit Broadband

Mozilla recently awarded $280,000 in grants from its Gigabit Communities Fund to projects that are finding beneficial uses of gigabit broadband. This is the latest set of grants, and the company has awarded more than $1.2 million to over 90 projects in the last six years. For any of you not aware of Mozilla, they offer a range of open-standard software that promotes privacy. I’ve been using their Firefox web browser and other software for years. As an avid reader of web articles, I use their Pocket app daily for tracking the things I’ve read online.

The grants this year went to projects in five cities: Lafayette, LA; Eugene, OR; Chattanooga, TN; Austin, TX; and Kansas City. Grants ranged from $10,000 to $30,000. At least four of those cities are familiar names. Lafayette and Chattanooga are two of the largest municipally-owned fiber networks. Austin and Kansas City have fiber provided by Google Fiber. Eugene is a newer name among fiber communities and is in the process of constructing an open access wholesale network, starting in the downtown area.

I’m not going to recite the list of projects; a synopsis of them is on the Mozilla blog. The awards this year have a common theme of promoting the use of broadband for education. The awards were given mostly to school districts and non-profits, although for-profit companies are also eligible for the grants.

The other thing these projects have in common is that they are developing real-world applications that require robust broadband. For example, several of the projects involve using virtual reality. There is a project that brings virtual reality to several museums and another that shows how soil erosion from rising waters and sediment mismanagement has driven the Biloxi-Chitimacha-Choctaw band of Indians from the Isle de Jean Charles in Louisiana.

I clearly remember getting my first DSL connection at my house after spending a decade on dial-up. I got a self-installed DSL kit from Verizon and it was an amazing feeling when I connected it. That DSL connection provided roughly 1 Mbps, which was 20 to 30 times faster than dial-up. That speed increase freed me up to finally use the Internet to read articles, view pictures and shop without waiting forever for each web site to load. I no longer had to download software updates at bedtime and hope that the dial-up connection didn’t crap out.
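To put that jump in perspective, here is a rough back-of-the-envelope comparison. The 2 MB page weight is an arbitrary figure chosen for illustration, and these are nominal line rates; real dial-up throughput was usually well below 56 kbps, which is how the practical speedup reached 20 to 30 times.

```python
def download_seconds(size_mb: float, speed_mbps: float) -> float:
    """Seconds to transfer size_mb megabytes at speed_mbps megabits per second."""
    return size_mb * 8 / speed_mbps

# Compare a 2 MB transfer on dial-up versus early DSL.
for name, mbps in [("56k dial-up", 0.056), ("1 Mbps DSL", 1.0)]:
    print(f"{name:>12}: {download_seconds(2, mbps):6.1f} seconds")
```

At nominal rates the same transfer drops from nearly five minutes to about sixteen seconds, which is why the upgrade felt transformative.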

I remember when Google Fiber first announced they were going to build gigabit networks for households. Gigabit broadband brought that same experience. At the time of the announcement most cable networks had maximum speeds of perhaps 30 Mbps – and Google was bringing more than a 30-times increase in speed.

Almost immediately we heard from the big ISPs who denigrated the idea saying that nobody needs gigabit bandwidth and that this was a gimmick. Remember that at that time the CEO of almost every major ISP was on the record saying that they provided more than enough broadband to households – when it was clear to users that they didn’t.

Interestingly, since the Google Fiber announcement the big cable companies have decided to upgrade their own networks to gigabit speeds and ISPs like AT&T and Verizon rarely talk about broadband without mentioning gigabit. Google Fiber reset the conversation about broadband and the rest of the industry has been forced to pay heed.

The projects being funded by Mozilla are just a few of the many ways that we are finding applications that need bigger broadband. I travel to communities all over the country and in the last year I have noticed a big shift in the way that people talk about their home broadband. In the past people would always comment that they seemed to have (or not have) enough broadband speed to stream video. But now, most conversations about broadband hit on the topic of using multiple broadband applications at the same time. That’s because this is the new norm. People want broadband connections that can connect to multiple video streams simultaneously while also supporting VoIP, online schoolwork, gaming and other bandwidth-hungry applications. I now routinely hear people talking about how their 25 Mbps connection is no longer adequate to support their household – a conversation I rarely heard as recently as a few years ago.
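As a rough sketch of why a 25 Mbps connection no longer feels adequate, here is a simple bandwidth budget for a hypothetical evening in a connected household. The per-application rates are common rules of thumb, not measurements, and the mix of applications is invented for illustration.

```python
# Rough per-application bandwidth figures (rules of thumb, in Mbps).
rates_mbps = {
    "4K video stream": 25.0,
    "HD video stream": 5.0,
    "video call": 2.5,
    "online gaming": 1.0,
    "web browsing": 1.0,
}

# A hypothetical evening: one 4K stream, one HD stream, a video call,
# a gaming session, and two people browsing.
in_use = [("4K video stream", 1), ("HD video stream", 1),
          ("video call", 1), ("online gaming", 1), ("web browsing", 2)]

demand = sum(rates_mbps[app] * count for app, count in in_use)
print(f"Peak concurrent demand: {demand:.1f} Mbps")
```

Even with generous rounding, this household needs well over 25 Mbps the moment a single 4K stream joins the mix.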

We are not all going to need gigabit speeds for a while. But the same was true of my first DSL connection. I had that connection for over a decade, and during that time my DSL got upgraded once, to 6 Mbps. But even that eventually felt slow, and a few years later I was the first one in my area using the new Verizon FiOS and a 100 Mbps connection on fiber. ISPs are finally facing up to the fact that households expect a lot of broadband speed. The responsive ISPs are meeting this demand, while others bury their heads in the sand and try to convince people that their slower broadband speeds are still all that people need.

When Big ISPs Fail

It’s obvious from reading the press that Frontier Communications is in trouble. The company visibly bungled the integration of the properties most recently purchased from Verizon, including some FiOS properties. The company was already experiencing customer losses, which have accelerated in the last year. Frontier is already looking to raise cash by finding a buyer for some of the properties they just purchased from Verizon.

I have no idea if Frontier is going to declare bankruptcy or fail. Watching them struggle, though, brings back memories of other big telcos that have struggled badly in the past. We’ve seen this scenario enough times to understand what poor performance will mean.

Not every telco that has struggled has gone through bankruptcy. Probably the best example of a company that almost went under, but which instead struggled for years, was Qwest, which is now owned by CenturyLink. Within a few years after Qwest took over U.S. West the company fell on hard times. The company carried too much debt, and they didn’t do as well as expected in the long-line transport business that Qwest brought into the newly formed venture. The company was even fined $250 million by the Securities and Exchange Commission for shady deals made with Enron’s broadband business.

We saw the consequences of Qwest’s financial struggles. The company had little money for capital and let the copper plant deteriorate a lot faster than would be expected. There were widespread reports of rural outages that were repeatedly patched rather than fixed while the company focused its limited resources on the major urban markets. Qwest lost huge numbers of broadband customers to the cable companies and also got clobbered in enterprise sales.

We saw something similar with Charter Communications. The company filed for bankruptcy protection in 2009. They pared back on capital spending and went for a number of years without making the upgrades we saw from Comcast, Cox and Mediacom. Much of the company’s footprint was stuck with first generation cable modems with slow broadband speeds.

Frontier looks to be on a similar path to Fairpoint Communications after its purchase of Verizon properties. Fairpoint took on massive debt to buy the New England properties from Verizon and struggled after adding 1.4 million customers to a relatively small company. Within two years of the purchase Fairpoint went through bankruptcy reorganization, and it struggled ever after due to lack of cash. The company was recently purchased by Consolidated Communications.

What we’ve most learned from big ISPs that struggle is that the customers pay the price. All of these companies dealt with cash shortages by reducing staff and slashing capital expenditures. I remember Qwest staffing being reduced so much that there were entire rural counties that had only one Qwest technician. Qwest shuttered local business offices and lost the local touch in communities. Customers reported major delays in getting installations and repairs, with many reports of problems that were never solved.

We saw from Qwest and Charter that the first thing that goes in tight times is upgrades of technology. When those companies got into trouble they froze technology investment and innovation during a time when broadband speeds were climbing everywhere else.

The struggles of these big ISPs invited competition, and many communities served by Qwest and Charter saw competitors build new networks. I know of some towns where the new competitors got practically every customer, showing how fed up customers were with being neglected by their big ISP. Unfortunately, the majority of communities served by such ISPs saw no competition and suffered with poor service.

Sometimes companies that struggle eventually right the ship. We see Charter now making upgrades that are a decade or more late. CenturyLink is under new management and is trying hard to make things better, but still doesn’t have enough capital to fix decades of neglect to the network. CenturyLink even got more than a billion dollars in subsidies through the CAF II program to try to revitalize old rural copper. We’re going to have to wait to see if these big ISPs can make enough amends for communities to forgive them for decades of neglect.

My guess is that Frontier is not going to get the chance to reinvent themselves. They are struggling at a time when most of their rural communities are screaming for better broadband. It’s hard to imagine them somehow fixing their many problems.

Carrier-of-Last-Resort Obligations

Earlier this month the U.S. District Court for the District of Columbia upheld FCC orders that still require large telcos to be the carrier-of-last-resort provider of telephone service for at least some of their service territory. The ruling is the result of appeals by CenturyLink and AT&T of orders requiring them to provide telephone service to new rural households.

The idea of carrier of last resort has been part of the telephone industry for nearly as long as the FCC has been regulating it. The concept was a key component of spreading the telephone network to all corners of the country – Congress and the early FCC understood that the whole country was better off if everybody was connected.

Over the years the FCC and various state regulatory commissions ruled that telcos had to make a reasonable effort to connect rural customers. Telcos always had the option to petition against adding customers in really hard to reach places like mountaintops, but for the most part telcos routinely added new homes to the telephone network.

Carrier-of-last-resort obligations started to weaken with the introduction of competition in the Telecommunications Act of 1996. Since that time the big telcos have been able to walk away from carrier-of-last-resort obligations in most of their territory. This court order ruled that in areas where the telcos are still receiving federal high-cost support, they are still obligated to connect homes that request service.

I worked for Ma Bell pre-divestiture and there was a real pride in the telephone industry that the network reached out to everybody. Telcos then also deployed huge numbers of pay telephones throughout the network to reach those that couldn’t afford phone service – even though they lost money on many of the payphones. The Bell company and the smaller independent telcos made it their mission to extend the network to everybody.

This order made a few comments, though, that puzzled me. It points out that many of the high-cost areas served by the big telcos are up for new funding in the upcoming CAF II auctions. Any winners of that auction are required to file to become the Eligible Telecommunications Carrier (ETC) for any areas where they receive funding. The discussion in the court order implies that these new ETCs will become the carrier-of-last-resort in those areas.

That surprised me because there are plenty of carriers that have ETC status and yet are not the obligated carrier-of-last-resort. The best example is the same big telcos examined in this case, who are the ETC of record for their whole footprints but now only have carrier-of-last-resort obligations for the most rural areas covered by this case. There have been stories for years of people who built new homes, even in urban areas, and were refused service by both the telco and the cable company. The cable companies have no carrier-of-last-resort obligations, but it’s clear that in many places the telcos have been able to walk away from the obligation.

I think that companies seeking the CAF II reverse auction funding might be surprised by this interpretation of the rules. Being carrier-of-last-resort means that a carrier is obligated to build to reach anybody in the covered area that requests telephone service. The reverse auction doesn’t even require total coverage of the covered census blocks and that seems to be in conflict with the court’s interpretation. The reverse auction census blocks are some of the most sparsely populated areas of the country and building to even one remote customer in some of these areas could be extremely expensive.

Unfortunately, the carrier-of-last-resort obligation only applies to telephone service and not to broadband. It would be nice to see this concept applied to broadband, and the FCC missed a good opportunity to do so when it handed out billions of federal dollars in the CAF II plan. Under that plan the big telcos are only required to make their best effort to reach customers with broadband in the areas that got the CAF II funding – I’m hearing from rural people all over the country that a lot of the CAF II areas aren’t seeing any upgrades. For the most part, the ideas of carrier-of-last-resort and universal coverage are becoming quaint concepts of our past.

The Price for Triple Play?

I was recently working on a project for a client who is thinking about competing in a new city and wants to understand the real market rates customers are paying. We solicited copies of bills from existing subscribers to the incumbent telco and cable company to find out.

I doubt that anybody would be surprised from what we found, but it was good to be reminded of the billing practices of the big ISPs. Here are a few of the things we found:

  • Both incumbents use promotional rates to provide lower prices to new customers or to existing customers who are willing to negotiate and sign up for a term contract. Promotional discounts were all over the board and seemed mostly to range between a $5 and a $25 discount per month. But there was one customer who was getting a $60 discount on a $180 monthly bill.
  • Both incumbents also offer bundling discounts, but they were applied erratically. Our sample of bills was not a statistically valid sample, but roughly half of the bills we saw had a bundled discount while other customers buying the same products were not getting a discount.
  • The cable incumbent offers the typical three tiers of service offered by most cable companies. While every cable customer had one of these three packages, we surprisingly didn’t see any two customers paying the same price.
  • The cable company had programming fees that were separate from the base programming charges – one fee to cover local programming costs and another labeled as a sports fee. These were not always billed at the same rate, and they were not billed to all customers with the same packages.
  • There was also a varying range of fees for set-top boxes and cable modems from the cable company and WiFi modems from the telco.
  • What surprised me most was how widely the taxes varied from bill to bill. Customers with the same products often had tax charges several dollars apart. This makes me wonder why more taxing authorities aren’t auditing bills from time to time to see if all of the tax due to them is even being billed.
  • Nowhere on the bills was any customer told the speed of their broadband products.
  • There were obvious billing errors. For example, I saw a bill charging the subscriber line charge to somebody who doesn’t have a telephone line. They probably had one in the past and are still paying $6.50 per month long after they dropped their landline.

I hadn’t looked at that many customer bills from a single market for a while. I’ve always known that prices vary by customers, but I didn’t expect them to vary this much. My primary take-away from this analysis is that there is no one price for telecom products. I hear clients all of the time saying things like “My primary competition comes from a $49 broadband connection from the cable company”. But that’s not really true if most people are paying something different than $49. Some customers have discounts to lower that price while others may be paying more after considering ancillary fees.

The bills were confusing, even to me, and I know what to look for. It would be easy, for example, for a customer to think that a local programming fee or an FCC line charge is a tax rather than revenue that is kept by the service provider. Both ISPs mixed these fees on the bill with actual taxes, making it impossible for the average customer to distinguish between a tax and a fee that is just a piece of a product billed under a different name.
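To illustrate the distinction, here is a sketch with entirely hypothetical line items and amounts. Only the last two entries are true pass-through taxes; the fee lines, despite their official-sounding names, are revenue the provider keeps.

```python
# Hypothetical bill line items (names and amounts invented for illustration).
line_items = [
    ("Internet 100 Mbps",          49.99, "product"),
    ("Local programming fee",       8.00, "provider fee"),  # kept by the ISP
    ("FCC subscriber line charge",  6.50, "provider fee"),  # kept by the ISP
    ("State sales tax",             3.25, "tax"),
    ("911 surcharge",               0.75, "tax"),
]

kept_by_isp = sum(amt for _, amt, kind in line_items if kind != "tax")
true_taxes = sum(amt for _, amt, kind in line_items if kind == "tax")
print(f"Kept by the ISP: ${kept_by_isp:.2f}, actual taxes: ${true_taxes:.2f}")
```

On this invented bill the customer might assume $15 or more goes to the government, when in fact only $4.00 does.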

These bills also made me wonder if the corporate staff of these big ISPs realize the wide range of prices customers are paying. In many cases there were fees that could have been billed but weren’t. And there was a wide variance in tax billing that would make a corporate CFO cringe.

These bills reinforce the advice I always give to clients. I think customers like transparency, and the best bill is one that informs customers about what they are buying. In this market most customers could not tell you what they are paying for the various products. Bills can be simple yet informative, and some of my clients have wonderful bills. After seeing the billing mess from these two big ISPs, I think honest, straightforward billing is another advantage for a competitor.

Metering Broadband

A lot of the controversy about Comcast data caps disappeared last year when they raised the monthly threshold for data caps from 300 gigabytes to 1 terabyte. But lately I’ve been seeing folks complaining about being charged for exceeding the 1 TB data cap – so Comcast is still enforcing their data caps rules.

In order to enforce a data cap an ISP has to somehow meter the usage. It appears that in a lot of cases ISPs do a lousy job of measuring usage. Not all ISPs have data caps. The biggest ISPs that have them include Comcast, AT&T, CenturyLink for DSL, Cox and Mediacom. But even these ISPs don’t enforce data caps everywhere, like Comcast not enforcing them where they compete directly against Verizon FiOS.

Many customer home routers can measure usage and there are reports of cases where Comcast data usage measurements are massively different than what is being seen at the home. For example, there are customers who have seen big spikes in data measurement from Comcast at a time when their routers were disconnected or when power was out to the home. There are many customers who claim the Comcast readings always greatly exceed what they are seeing at their home routers.

Data caps matter because customers that exceed them get charged a fee. Comcast charges $10 for each 50 GB of monthly usage over the cap. Mediacom has the same fees, but with much smaller data caps, such as a 150 GB monthly cap on customers with a 60 Mbps product.
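The overage arithmetic can be sketched like this. I’m assuming a partial block is rounded up to a full $10 charge, which matches how these fees are commonly described, though individual ISPs may bill differently.

```python
import math

def overage_charge(usage_gb: float, cap_gb: float,
                   block_gb: float = 50.0, block_fee: float = 10.0) -> float:
    """Fee for usage over the cap, billed per 50 GB block.

    Assumes partial blocks round up to a full block charge.
    """
    over_gb = max(0.0, usage_gb - cap_gb)
    return math.ceil(over_gb / block_gb) * block_fee

print(overage_charge(1120, 1000))  # 120 GB over a 1 TB cap -> 30.0
print(overage_charge(180, 150))    # over a Mediacom-style 150 GB cap -> 10.0
```

Note how the smaller Mediacom cap triggers a fee at a fraction of the usage that Comcast allows.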

It’s not hard to imagine homes now exceeding the Comcast data cap limit. Before I left Comcast a year ago they said that my family of three was using 600 – 700 GB per month. Since I didn’t measure my own usage I have no idea if their numbers were inflated. If their measurements were accurate, it’s not hard to imagine somebody with several kids at home exceeding 1 TB. The ISPs claim that only a small percentage of customers hit the data cap limits – but in a world where data usage keeps growing exponentially each year, more homes will hit the limit as time goes by.
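A quick projection shows how fast growth erodes the headroom. Here I assume a household at 600 GB per month and a 20% annual growth rate; both figures are assumptions chosen for illustration, in line with the exponential growth the industry keeps reporting.

```python
# How many years until a 600 GB/month household crosses a 1 TB (1000 GB) cap,
# assuming usage grows 20% per year?
usage_gb, years = 600.0, 0
while usage_gb < 1000.0:
    usage_gb *= 1.20
    years += 1
print(years)  # crosses the cap in year 3
```

Under these assumptions a household comfortably below the cap today is paying overage fees in just three years.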

What I find interesting is that there is zero regulation of the ISP data ‘meters’. Every other kind of meter that is used to bill customers is regulated. Utilities selling water, electricity or natural gas must use meters that are certified to be accurate. Meters on gas pumps are checked regularly for accuracy.

But there is nobody monitoring the ISPs and the way they measure data usage. The FCC effectively washed its hands of regulating broadband when it killed Title II regulation. Theoretically the Federal Trade Commission could tackle the issue, but they are not required to do so. They regulate interactions with customers in all industries and can select the cases they want to pursue.

There are a few obvious reasons why the readings from an ISP would differ from those at a home, even under ideal conditions. ISPs measure usage at their network hub while a customer measurement happens at the home. There are always packets lost in the network due to interference or noise, particularly with older copper and coaxial networks. The ISP counts all data passing through the hub as usage, although many of those packets never make it to customers. But when you read some of the horror stories where homes that don’t watch video see daily readings from Comcast of over 100 GB, you know that there is something wrong in the way that Comcast is measuring usage. It has to be a daunting task to measure the usage destined for thousands of users simultaneously, and Comcast obviously has problems in its measurement algorithms.
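A toy model shows why even an honest hub-side meter reads higher than a home router. The hub counts every byte it sends while the home counts only the bytes that survive the trip. The 2% loss rate is an assumed figure for an aging coax plant; note that loss on this scale explains only small gaps, not 100 GB phantom days.

```python
# Hub-side vs. home-side byte counting with packet loss in between.
sent_gb = 500.0      # counted at the ISP hub
loss_rate = 0.02     # assumed 2% loss between hub and home

delivered_gb = sent_gb * (1 - loss_rate)   # what the home router counts
gap_gb = sent_gb - delivered_gb            # built-in metering discrepancy
print(f"hub meter: {sent_gb:.0f} GB, home meter: {delivered_gb:.0f} GB, "
      f"gap: {gap_gb:.0f} GB")
```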

I’ve written about data caps before. It’s obvious that the caps are just a way for ISPs to charge more money, and it’s a gigantic amount of extra revenue if Comcast can bill $10 per month extra to even a few percent of its 23 million customers. Anybody who understands the math behind the cost of broadband knows that a $10 charge for an extra 50 GB of usage is almost 100% profit. It doesn’t cost the ISP anything close to $10 to deliver the first terabyte, let alone the incrementally small additional amount. And there certainly is no cost at all if the Comcast meters are billing for phantom usage.
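The scale of that revenue is easy to sketch. Here I assume “a few percent” means 5% of subscribers paying one $10 overage block per month; both the share and the average fee are assumptions.

```python
# Back-of-the-envelope data-cap revenue for a 23-million-subscriber ISP.
subscribers = 23_000_000
share_over_cap = 0.05     # assumed share of customers exceeding the cap
fee_per_month = 10.0      # one 50 GB overage block

monthly_revenue = subscribers * share_over_cap * fee_per_month
print(f"${monthly_revenue:,.0f} per month, "
      f"${monthly_revenue * 12:,.0f} per year")
```

Even under these modest assumptions the fees add up to well over $100 million a year of nearly pure margin.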

I don’t know that there is any fix for this. However, it’s clear that customers being charged for exceeding data caps will switch to a new ISP at the first opportunity. The big ISPs wonder why many of their customers loathe them, and this is just one more way for a big ISP to antagonize its customers. It’s why every ISP that builds a fiber network to compete against a big cable company understands that it will almost automatically get 30% of the market from customers who have come to hate their cable ISP.

Fiber Electronics and International Politics

In February, six US intelligence agencies warned Americans against using cellphones made by Huawei, a Chinese manufacturer. They warned that the company is “beholden” to the Chinese government and that we shouldn’t trust its electronics.

Recently Rep. Liz Cheney introduced a bill in Congress that would prohibit the US government, or any contractors working for it, from using electronics from Huawei or from another Chinese company, ZTE Corp. Additionally, any US military base would be prohibited from using any telecom provider that has equipment from these two vendors anywhere in its network.

For anybody who doesn’t know these two companies, they manufacture a wide array of telecom gear. ZTE is one of the five largest cellphone makers in the world. It also makes electronics for cellular networks, FTTP networks and long-haul fiber routes. The company sells under its own name, but also OEMs equipment for a number of other vendors, which can make it hard for a carrier to know whether it has gear originally manufactured by the company.

Huawei is even larger and is the largest maker of telecom electronics in the world, having passed Ericsson a decade ago. The company’s founder has close ties to the Chinese government, and its electronics have been used to build much of the huge wireless and FTTP networks in China. The company makes cellphones and FTTP equipment, and it is also an innovator in equipment that can be used to upgrade cable HFC networks.

This is not the first time there have been questions about the security of electronics. In 2014 Edward Snowden released documents showing that the NSA had been planting backdoor software in Cisco routers being exported overseas from the US, and that these backdoors could be used to monitor internet usage and emails passing through the routers. Cisco said it had no idea the practice was occurring and that the software was being added after the equipment left its control.

Huawei and ZTE Corp also say that they are not monitoring users of their equipment. I would assume that the NSA and FBI have some evidence that at least the cellphones from these companies can be used to somehow monitor customers.

It must be hard to be a telecom company somewhere outside of the US and China because our two countries make much of the telecom gear in wide use. I have to wonder what a carrier in South America or Africa thinks about these accusations.

I have clients who have purchased electronics from these two Chinese companies. In the FTTP arena the two companies have highly competitive pricing, which is attractive to smaller ISPs updating their networks to fiber. Huawei also offers several upgrade solutions for HFC cable networks that are far less expensive than the handful of other vendors offering solutions.

The announcements by the US government create a quandary for anybody who has already put this gear into their network. At least for now, the potential problems from using this equipment have not been specifically identified, so a network owner has no way of knowing whether the problem is only with cellphones, whether it applies to everything made by these companies, or even whether the warnings are political rather than technical in nature.

Any small carrier using this equipment likely cannot afford to remove and replace it. The folks I know using ZTE FTTP gear praise the ease of using the electronics – which makes sense, since these two companies have far more installed fiber customers worldwide than any other manufacturer.

Somebody with this equipment in their network faces several quandaries. Do they continue to complete networks that already use this gear, or should they introduce a second vendor into the network – an expensive undertaking? Do they owe any warnings to their own customers (at the risk of losing them)? Do they do anything at all?

For now all that is in place is a warning from US intelligence agencies not to use the gear; there is no prohibition against doing so. And even if the bill passes, it would only prohibit ISPs using the gear from providing telecom services to military bases – a line of business largely handled by the big telcos with nationwide government contracts.

I have no advice to give clients on this other than to strongly consider not choosing these vendors for future projects. If the gear is as bad as it’s being made to sound, it’s hard to understand why the US government wouldn’t ban it rather than just warn about it. I can’t help but wonder how much of this is international wrangling over trade rather than any specific threat or risk.

The Seasonality Dilemma

One issue that I often see neglected in looking at financial projections for potential fiber projects is seasonality. Seasonality is the term used among utilities to describe groups of customers who are not full-time residents.

There are a lot more kinds of seasonal customers than many people realize. Consider the following:

  • Tourist areas are the ones most familiar with this idea. While most tourist areas get busy in the summer, there are also ski towns that are busy only in the winter. These communities are now finding that those who visit or have seasonal homes there expect to have broadband.
  • College students. College towns face the unusual challenge that students not only generally leave for the summer, but because of the big annual turnover in students, much student housing sits vacant during that time.
  • Snowbirds go south for the winter, but they come from somewhere in the north. I have clients in farming communities that see a big outflux of residents heading south each winter.
  • While it’s not purely a seasonality issue, communities near military bases face similar issues. They experience high churn among customers and requests to put service on hold during deployments.

ISPs face some interesting challenges with seasonality. Consider college towns. They lose significant numbers of customers every summer, and not just graduating students, but also those who will be moving to a new apartment or home in the fall. Then, all of the students come back at once at the end of August and expect to be connected immediately.

Students create several challenges for an ISP. First, a fiber overbuilder might not be well known and so has to market hard during that period so that new students know there is an alternative. There is also the issue of making many connections in a short period of time. Students are also a billing challenge and it’s not unusual for students to run out of money before the end of a school year. I have one client that offers a special discounted rate for the school year to students who will prepay.

Tourist areas are a challenge because many customers will strongly resist paying for broadband and other triple play products for the months they are gone. And unlike college towns, in tourism areas it’s not unusual for customers to be gone for more of the year than they are present. This creates a financial challenge for an ISP. It’s hard enough to justify the cost of adding a new customer to a fiber network; it’s even harder to justify making that same investment to get only a half year or less of revenue from each seasonal customer.
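A minimal payback sketch makes the point; all of the numbers here are hypothetical:

```python
# Hypothetical payback math: seasonal occupancy stretches the time
# needed to recover the cost of connecting one fiber customer.

def payback_years(build_cost: float, monthly_revenue: float,
                  months_active_per_year: int) -> float:
    """Years to recover a per-customer build cost from subscription revenue."""
    return build_cost / (monthly_revenue * months_active_per_year)

cost, rate = 1500.0, 70.0   # assumed per-customer drop cost and monthly rate
print(f"Full-year customer:        {payback_years(cost, rate, 12):.1f} years")
print(f"5-month seasonal customer: {payback_years(cost, rate, 5):.1f} years")
```

Under these assumptions, the same drop takes well over twice as long to pay for itself when the customer only buys service five months a year, before even counting churn or collection costs.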

I’ve seen ISPs deal with this in several ways, none of them totally satisfactory. Some ISPs let seasonal customers disconnect and then charge a reconnect fee when they want service again. I know ISPs that charge a small monthly ‘maintenance’ fee that keeps service live in the offseason at a greatly reduced rate; these plans usually exclude cable TV to relieve the ISP from paying for programming that nobody is watching. I also know a few ISPs that try to make seasonal customers pay for the whole year.

Communities that lose resident snowbirds are starting to see the same requests to suspend charges for service while residents are away for the winter.

Most communities don’t have a major seasonal issue. But for those that do, it’s important to anticipate this issue when predicting possible costs to build the network versus the revenues that will be used to pay for it. It’s a lot harder to justify building a new network if a significant percentage of the customers don’t want to pay for a whole year of service.

The Migration to an All-IP Network

Last month the FCC recommended that carriers adopt a number of security measures to help block hacking of SS7 (Signaling System 7). Anybody with telephone network experience is familiar with the SS7 network. It provides a second communication path that has been used to improve call routing and to implement the various calling features such as caller ID.

Last year it became public that the SS7 network has some serious vulnerabilities. In Germany hackers were able to use the SS7 network to connect to and empty bank accounts. Those specific flaws have been addressed, but security experts look at the old technology and realize that it’s open to attack in numerous ways.

It’s interesting to see the FCC make this recommendation, because there was a time when it looked like SS7 would be retired and replaced. I remember reading articles over a decade ago that forecast the pending end of SS7. At that time everybody thought that our legacy telephone network was going to be quickly migrated to an all-IP network and that older technologies like SS7 and TDM would be retired from the telecom network.

This big push to convert to an IP voice network was referred to by the FCC as the IP transition. The original goal of the transition was to replace the nationwide networks that connect voice providers. This nationwide network is referred to as the interconnection network, and every telco, CLEC and cable company in the voice business is connected to it.

But somewhere along the line AT&T and Verizon hijacked the IP transition. All of a sudden the transition was about converting last-mile TDM networks to IP, with Verizon and AT&T wanting to tear down rural copper and largely replace it with cellular. This was not the intention of the original FCC plan. The agency wanted to require an orderly transition of the interconnection network, not the last-mile customer network. The idea was to design a new network that would better support an all-digital world while still connecting to older legacy copper networks until they reach the end of their economic life. As an interesting side note, the same FCC has poured billions into extending the life of copper networks through the CAF II program.

Meanwhile, discussions about upgrading connections between carriers to IP fizzled out. The original FCC vision was to take a few years to study the best path to an all-IP interconnection network and then require telcos to migrate away from the old TDM networks.

I recently had a client who wanted to establish an IP connection with one of the big legacy telcos, something I know is being done in some places. To my client’s surprise, the telco said it still requires a TDM interface; this particular big telco was not yet ready to accept IP trunking connections.

I’ve also noticed that the cost for my clients to buy connections into the SS7 network has climbed over the past few years. That’s really odd when you consider that these are old networks and the core technology is decades old. These networks have been fully depreciated for many years, and the idea that the cost to use SS7 is climbing is absurd. It harkens back to paying $700 per month for a T1, something that sadly still exists in a few markets.

When the FCC first mentioned the IP transition, I fully expected that TDM between carriers would be long gone by now, and with it SS7. SS7 will still be around in the last-mile network and at the enterprise level, since it’s built into the features used by telcos and into the older telephone systems owned by many businesses. The expectation from those articles a decade ago was that SS7 and other TDM-based technologies would slowly fizzle as older products were removed from the market. An IP-based telecom network is far more efficient and cost-effective, and eventually all telecom will be IP-based.

So I am a bit puzzled about what happened to the IP transition. I’m sure it’s still being talked about by policy-makers at the FCC, but the topic has publicly disappeared. Is this ever going to happen or will the FCC be happy to let the current interconnection network limp along in an IP world?