5G Needs Fiber

I am finally starting to see an acknowledgement by the cellular industry that 5G implementation is going to require fiber – a lot of fiber. For the last year or so the industry press – prompted by misleading press releases from the wireless companies – made it sound like wireless was our future and that there would soon not be any need for building more wires.

As always, when there is talk about 5G there is a need to be clear about which 5G we mean, because there are two distinct 5G technologies on the horizon. One is high-speed wireless loops delivered directly to homes and businesses as a replacement for a wired broadband connection. The other is 5G cellular providing bandwidth to our cellphones.

It’s interesting to see the term 5G being used for a wireless microwave connection to a home or business. For the past twenty years this same technology has been referred to as wireless local loop, but in the broadband world the term 5G has marketing cachet. Interestingly, a lot of these high-speed data connections won’t even be using the 5G standards and could just as easily be transmitting the signals using Ethernet or some other transmission protocol. But the marketing folks have declared that everything that uses the millimeter wave spectrum will be deemed 5G, and so it shall be.

These fixed broadband connections are going to require a lot of fiber close to customers. The current millimeter wave radios are capable of delivering speeds up to a gigabit on a point-to-point microwave basis. And this means that every 5G millimeter wave transmitter needs to be fiber fed if there is any desire to offer gigabit-like speeds at the customer end. You can’t use a 1-gigabit wireless backhaul to feed multiple gigabit transmitters, and thus fiber is the only way to get the desired speeds to the end locations.
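The backhaul arithmetic here is simple to illustrate. A minimal sketch, where the transmitter count is a hypothetical round number, not a figure from any actual deployment:

```python
# Sketch: why a 1 Gbps wireless backhaul can't feed multiple gigabit
# transmitters. All numbers are hypothetical illustrations, not vendor specs.

def peak_speed_per_transmitter(backhaul_gbps: float, transmitters: int) -> float:
    """Best-case speed each transmitter can deliver when all are busy at once."""
    return backhaul_gbps / transmitters

# A single 1 Gbps wireless backhaul feeding four gigabit-capable transmitters
# leaves each one only a quarter of a gigabit at peak times.
print(peak_speed_per_transmitter(1.0, 4))  # 0.25 Gbps per transmitter
```

Only a fiber feed, which can be lit at 10 Gbps or more, avoids this shared-backhaul ceiling.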

The amount of fiber needed for this application is going to depend upon the specific way the network is being deployed. Right now the predominant early use for this technology is to use the millimeter wave radios to serve an entire apartment building. That means putting one receiver on the apartment roof and somehow distributing the signal through the building. This kind of configuration requires fiber only to those tall towers or rooftops used to beam a signal to nearby apartment buildings. Most urban areas already have the fiber to tall structures to support this kind of network.

But for the millimeter technology to bring gigabit speeds everywhere it is going to mean bringing fiber much closer to the customer. For example, the original Starry business plan in Boston had customers receiving the wireless signal through a window, and that means having numerous transmitters around a neighborhood so that a given apartment or business can see one of them. This kind of network configuration will require more fiber than the rooftop-only network.

But Google, AT&T and Verizon are all talking about using millimeter wave radios to bring broadband directly into homes. That kind of network is going to require even more fiber since a transmitter is going to need a clear shot near street level to see a given home. I look around my own downtown neighborhood and can see that one or two transmitters would only reach a fraction of homes and that it would take a pole-mounted transmitter in front of homes to do what these companies are promising. And those transmitters on poles are going to need to be fiber-fed if they want to deliver gigabit broadband.

Verizon seems to understand this and they have recently talked about needing a ‘fiber-rich’ environment to deploy 5G. The company has committed to building a lot of fiber to support this coming business plan.

But, as always, there is a flip side to this. These companies are only going to deploy these fast wireless loops in neighborhoods that already have fiber or in places where it makes economic sense to build it. And this is going to mean cherry-picking – the same as the big ISPs do today. They are not going to build the fiber in neighborhoods where they don’t foresee enough demand for the wireless broadband. They won’t build in neighborhoods where the fiber construction costs are too high. One only has to look at the hodgepodge Verizon FiOS fiber network to see what this is going to look like. There will be homes and businesses offered the new fast wireless loops while a block or two away there will be no use of the technology. Verizon has already created fiber haves and have-nots due to the way they built FiOS and 5G wireless loops are going to follow the same pattern.

I think the big ISPs have convinced politicians that they will be solving all future broadband problems with 5G, just as they made similar promises in the past with other broadband technologies. But let’s face it – money talks and these ISPs are only going to deploy 5G / fiber networks where they can make their desired returns.

And that means no 5G in poorer neighborhoods. It might mean little or limited 5G in neighborhoods with terrain or other similar issues. And it certainly means no 5G in rural America because the cost to build a 5G network is basically the same as building a landline fiber network – it’s not going to happen, at least not by the big ISPs.

Cable Company Gigabit

We are starting to get a look at what a gigabit product from the cable companies might look like. Late last year Comcast rolled out a gigabit product in parts of Atlanta, Detroit, Nashville and Chattanooga. They are now rolling out the technology across the country, and the company says that gigabit speeds will be available in all markets by 2018.

Comcast has elected to make the upgrades by implementing DOCSIS 3.1 technology on their networks. This technology allows the network to bond together numerous empty channels on the cable system for use as broadband.
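As a rough illustration of what channel bonding buys, assume the commonly cited figure of roughly 38 Mbps of usable payload per 6 MHz QAM-256 downstream channel; that is an approximation for illustration, not a value from the DOCSIS specification:

```python
# Rough illustration of cable channel bonding. The per-channel throughput
# is an approximation, not a DOCSIS spec value.
import math

MBPS_PER_CHANNEL = 38  # approximate payload of one 6 MHz QAM-256 channel

def channels_needed(target_mbps: float) -> int:
    """How many bonded downstream channels it takes to reach a target speed."""
    return math.ceil(target_mbps / MBPS_PER_CHANNEL)

print(channels_needed(1000))  # about 27 channels to offer a gigabit tier
```

This is why freeing up empty channels matters: a gigabit tier consumes spectrum that would otherwise carry a few dozen TV channels.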

In markets where there is competition with Google Fiber or another fiber provider, the Comcast product is being sold at an introductory price of $70 per month with a 3-year contract. Month-to-month pricing without the contract is $140 per month. On discussion websites where Comcast customers chat it sounds like there are already many markets where the $70 contract price is not available. Some customers report prices of $110 to $120 per month, so perhaps the company is flexible with those willing to wade through the customer service maze and sign a term contract.

The current Comcast product delivers up to 1 Gbps download and 35 Mbps upload. You can expect Comcast to make future upgrades that will improve the upload speeds – but that upgrade is not included in this first generation of DOCSIS 3.1 technology. For now the upload speeds will be a barrier to any application that needs fast upload speeds.

The new technology also requires new hardware, meaning a new cable modem and a new WiFi router capable of handling the faster data speeds. So expect the price to be bumped higher to rent the hardware.

It’s hard to imagine that many customers are going to pony up more than $150 per month to get a gigabit connection and modem. When Google Fiber first introduced $70 gigabit to Kansas City (and when that was their only product), there were reports that there were neighborhoods where as many as 30% of the households subscribed to the gigabit product. But Google had a true $70 price tag and didn’t layer on fees for a modem or any other fees, like Comcast is surely going to do. It’s hard to imagine many customers agreeing to a 3-year contract for the gigabit product in competitive markets if they can buy it from somebody else without the contract. But perhaps Comcast will offer bundling incentives to pull the real cost under $70.

But we know that when there are more choices most customers opt for the lowest-price product they think is adequate for their needs. For example, when Google Fiber came to Atlanta they also had a 100 Mbps product for $50 per month, and it’s likely that most customers chose that product rather than paying extra for the gigabit.

The Comcast pricing might reflect that Comcast doesn’t want to add too many high-bandwidth customers at the same time. While DOCSIS 3.1 increases the size of the data pipes available to customers, it doesn’t make any significant improvements in the last mile network. To the extent that high-bandwidth customers use a lot more data, too many gigabit customers in a cable company node could degrade service for everybody else. It’s likely that most gigabit customers don’t use a lot more data than 100 Mbps subscribers – they just get things done more quickly. But I am sure that Comcast still has worries about having too many high-bandwidth customers in the network.

Comcast and other cable companies are seeing more competition. For example, CenturyLink is selling $85 gigabit service in many western cities and passed about 1 million homes with fiber last year. Verizon FiOS just increased their data speeds in their fiber markets – not quite to a gigabit yet, but at ranges up to half a gigabit. But in the vast majority of the country the cable companies are not going to have significant competition for the foreseeable future.

FCC Commissioner Michael O’Rielly said a few weeks ago that ultrafast broadband is a marketing gimmick. While he was even referring to 100 Mbps broadband as a gimmick, it’s hard not to agree with him that a residential gigabit bandwidth product priced above $150 per month is more gimmick than anything else. There can’t be that many households in any market willing to pay that much extra just for the prestige of saying they have a gigabit.

But over time the prices will drop and the demand for bandwidth will grow and a decade from now there will be a significant portion of the market clamoring for an affordable gigabit product. Remember that we’ve seen this same thing happen a number of times in the past. I remember the big deal the cable companies made when they first increased speeds to 15 Mbps. The funny thing is that the market has a way of filling faster data pipes, and the day will come sooner than we expect where many households will legitimately want and need gigabit data pipes.     

Who Will Win the Telecom Battle?

Now that Google has pulled back with expansion of Google Fiber it’s easy to see that the cable companies and telcos think they have won the broadband war. But I think if you look a little closer this might not really be the case.

Tech companies like Google, Facebook and Amazon are still focused on making sure that people have enough bandwidth to take advantage of the many products these giant companies offer or plan to offer in the future. And all three companies are growing in importance as content providers.

Consider first the strength of these companies as content providers. Google owns YouTube which is becoming the most important video destination for the younger generation – and those kids are growing up. We’ve seen young millennial households largely reject traditional cable TV offerings. While Amazon Prime is not nearly as big as Netflix it is a strong second and is continuing to grow. Amazon is also reported to be pouring big money into producing original content for its platform. Facebook is on a trajectory to become the preferred source of news and information. And their Facebook Live is also quickly becoming a huge content platform.

But content isn’t everything. Consider that these companies have amassed an enormous private fiber network. Google doesn’t talk about its network, but way back in 2013 it was reported that Google had assembled a network consisting of 100,000 miles of dark fiber. Both Amazon and Facebook have also built large private networks, though we don’t know their size. We know that Google and Facebook have partnered to build a massive undersea fiber to China and are looking at other undersea fiber routes. Amazon has built a huge network to support its cloud services business. It would not be surprising if these companies have together already amassed a larger fiber network than the telcos and cable companies. If they are not bigger yet, they are on a trajectory to get there soon. With these networks the tech companies could hurt the big ISPs where it hurts most – by taking a huge bite out of their special access and transport businesses.

These companies are also not done with the ISP business. Google Fiber has pulled back from expanding FTTH networks for now, but they acquired Webpass and are looking to expand as an ISP using a wireless last mile. And we saw in Huntsville that Google is not afraid to use somebody else’s fiber network – something we have never seen any of the telcos or cable companies consider. It would not be surprising to see Google make deals with other private networks to expand its ISP business while avoiding the upfront capital. But perhaps Google’s biggest foray into providing data services is Google Fi, their service that provides unlimited cellular data using WiFi first rather than cellular. It’s been rumored that Google is looking for partnerships to expand WiFi access in many markets. And it’s been reported that Amazon is strongly considering becoming an ISP. I’ve not heard any details about how they might do this, but the company has shown the ability to succeed in everything it’s tackled – so it’s an intriguing possibility.

It’s a gigantic task to take on companies like AT&T and Comcast head on. I think Google Fiber learned this the hard way. But at the end of the day content is still king. As these companies continue to grow in influence as content providers they present a real challenge to traditional programmers. But they also are a growing threat to the big ISPs. If these tech companies decide that their best strategy is to directly deliver their content to subscribers they have a big enough market position to pull along a huge number of customers. It’s clear that consumers like these tech companies far more than they like the big ISPs, and in the end the animus the big ISPs have accumulated with customers might be their undoing.

This kind of industry shift won’t happen overnight. But it’s already quietly going on behind the scenes. The day when these companies provide more content than the traditional programmers and also carry more bandwidth on their own networks than the big ISPs may not be as far away as you might imagine. From my perspective that looks a lot like winning the battle.

The Google Fiber Rumor

My wife went into a Best Buy this week and got to talking with the salespeople, who found out that we are fiber consultants. The first thing they wanted to know from her was whether the rumors were true that Google Fiber was coming to our town early next year. They firmly believed this was the case and they were really excited about the possibility.

I live in a small town halfway between Fort Myers and Sarasota in Florida. Google Fiber had been in talks with Tampa which is about two hours north of here. But I can’t imagine that our community is on anybody’s radar to build fiber. I live in a snowbird community, meaning we are where northerners come to get away from winter. For about six months a year this is a ghost town. Most of the houses in my neighborhood are dark for half the year. It’s hard to think that anybody would build fiber in an area where half the potential customers are gone half the year, and where a lot of the customers are elderly and not particularly interested in fiber broadband speeds.

But I find it intriguing that there is a strong rumor about getting fiber in this area. I am sure that this rumor started with folks in Tampa, since Google Fiber has been in talks with that city for the last few years. I guess people assume that if Google comes to Tampa it will come to the whole region.

But I think this rumor speaks to how much fiber is wanted. These salespeople were young techie guys and would be expected to be part of the fiber demographic. There are a number of people in every community who would love to buy fiber and who would sign up for it as soon as it is available. But the big question that still has to be answered is whether there are enough people willing to pay a premium price for fiber to even create a business plan.

It certainly doesn’t seem like Google Fiber has been going gangbusters. They don’t release customer numbers, but the general buzz in the industry is that they haven’t picked up as many customers as they had hoped for. And that, possibly more than any other factor, has probably led to them taking a ‘pause’ from new fiber expansion.

Building fiber networks is expensive. I create fiber business plans and have studied every size market possible from farmlands to NFL cities. The one common feature of every fiber business is that there is some minimum customer penetration rate needed just to break even – with breaking even meaning being able to cover all of the costs of operations including capital.
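A minimal sketch of that breakeven math, using entirely hypothetical numbers – the homes passed, cost per home, price, and amortization period below are illustrations, not figures from any real build:

```python
# Toy fiber breakeven model. Every input here is hypothetical.

def breakeven_penetration(homes_passed: int, capex_per_home: float,
                          monthly_opex_per_sub: float, monthly_price: float,
                          amortization_months: int) -> float:
    """Fraction of homes passed that must subscribe just to cover all costs,
    including the capital spent to build the network."""
    monthly_capex = (homes_passed * capex_per_home) / amortization_months
    margin_per_sub = monthly_price - monthly_opex_per_sub
    subscribers_needed = monthly_capex / margin_per_sub
    return subscribers_needed / homes_passed

# 10,000 homes passed, $2,500 per home to build, $70 price,
# $30 monthly operating cost per subscriber, 20-year capital recovery.
rate = breakeven_penetration(10_000, 2_500, 30, 70, 240)
print(round(rate, 2))  # 0.26 – roughly a 26% penetration just to break even
```

Change any one input – a higher cost per home, a lower price – and the required penetration moves quickly, which is why every market needs its own study.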

When municipalities and cooperatives look to build fiber they want to make sure that they do a little better than breakeven. They will obviously be pleased if a fiber business does even better and spins off cash, but they worry more about having to subsidize a fiber business. And it is this perspective that makes it seem easier in some ways to build fiber to small towns and even to farms than to big cities. For these builders, breakeven is good enough.

But Google Fiber or CenturyLink or any of the other commercial fiber overbuilders want to do much better than breakeven. These big companies are beholden to shareholders who expect a significant increase quarter-over-quarter in profits and returns. And the need for significant profits means they have to get a lot of customers to meet their financial goals.

Frankly, the desire for high profits and high capital costs don’t jibe very well. Fiber is infrastructure and it’s a real challenge to get high returns out of any kind of infrastructure. Other utilities like electric or water are a lot more realistic and hope to make a modest but steady profit for a long time. If fiber overbuilders were being realistic they would have the same perspective – but tech companies are not utilities and their stockholders are not going to be patient with slow, steady returns.

Google Fiber is now on hold and will consider expanding again if they can find a way to use wireless technology to build the last connection to customers. Assuming that such a technology lowers costs (not a given), then this would reset the bar and lower the breakeven needed – and also make it easier to make profits. But even then it’s going to cost a huge amount of money to build broadband in a city. Wireless networks like the ones Google is envisioning still require a lot of fiber, and that means they will still be an infrastructure-heavy business.

I think there is a good possibility that Google Fiber will never resume their expansion plans using an infrastructure model. This is going to disappoint millions who have been hoping for fiber like the guys at Best Buy. Google Fiber might still consider opportunities like Huntsville, Alabama where the city paid for the fiber network. But my guess is that the Google parent company doesn’t have a real appetite for infrastructure returns, and that is why Google Fiber is on hold.

The Broadband Battle in Nashville

There is a regulatory battle going on in Nashville that is the poster child for the difficulty of building new fiber networks in urban areas. The battle involves Google Fiber, AT&T, Comcast, and the Metro Council and Mayor of Nashville, all fighting over access to poles.

Google Fiber wants to come to Nashville and needs access to existing poles. About 80% of the current poles are owned by the city-owned Nashville Electric Service with the other 20% belonging to AT&T.

The Metro Council recently enacted a new ordinance called the One Touch Make Ready (OTMR) law. This law would speed up the process called make-ready, which is the process for making room for a new wire to be hung on poles. Under the new rules, Google Fiber or other new pole attachers would be free to move wires belonging to another utility to make room for their new wires. And the new attacher must pay for the needed changes, at whatever rate the other wire owners bill them.

The FCC took a stab at this problem a few years ago and they allow a new attacher to add their cables to a pole without approval if the paperwork process takes too long. But those rules only apply to poles that don’t need any make-ready work – and in an urban area most poles need some amount of make-ready work to make room for a new wire.

Current make-ready rules require that the owner of each existing wire be notified so that they can move their own wire as needed. As you might imagine, this means an overbuilder must issue a separate request to multiple wire owners for each individual pole that needs to be modified, including detailed instructions on the changes that must be made. The other wire owners are given an opportunity to disagree with the recommended changes. And this whole paperwork process can’t even begin until the pole owner has first inspected each pole and decided on a make-ready solution.

As you can easily imagine, since many of the other companies with wires on poles don’t want competition from Google Fiber or any other new competitor, they do everything legally possible to delay this process.

What I find ironic about this process is that the current wire owners can drag their feet even if their own existing wires are in violation of code. The various industry codes dictate a specified distance between different kinds of wires in order to make it safe for a technician to work on the wires, particularly during bad weather. I’ve found that most poles in an urban area have at least one existing code violation.

It’s also ironic that the cable company can drag their feet in this process. I’ve heard numerous stories about how the installers for the original cable networks often went rogue and installed their wires without getting formal permission from the pole owners. At that time the telcos and cable companies were not competitors and so nobody made a big fuss about this.

It’s been reported that one City Council member tried to stop the new law from going into effect by introducing an alternate proposal – which supposedly was written by AT&T. That alternative law gave the incumbents 45 days to make changes, but also limited the fast pole response to 125 poles per week. In a city the size of Nashville there are tens of thousands, and possibly even more than 100,000 poles that might need to be changed – and so that limit basically means it would take many years, possibly even decades, for a new fiber provider to build a city-wide network.
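That arithmetic is easy to check. A quick sketch using the pole counts from the alternate proposal:

```python
# How long a 125-poles-per-week cap stretches a citywide build.
# Pole counts are the rough figures discussed for Nashville.

def years_to_complete(total_poles: int, poles_per_week: int) -> float:
    """Years needed to work through all poles at the capped weekly rate."""
    return total_poles / poles_per_week / 52

print(round(years_to_complete(100_000, 125), 1))  # roughly 15.4 years
print(round(years_to_complete(50_000, 125), 1))   # even 50,000 poles takes ~7.7 years
```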

The new One Touch rule would allow Google Fiber or others to make the necessary changes to poles if the incumbent wire owners don’t act quickly enough to move their wires. AT&T has already sued the City to block the new ordinance. They argue that the City has no authority to order this for the AT&T-owned poles. They also argue that this change will disrupt their service and put their customers out of business. The lawsuit is, of course, another delaying tactic, even should the City prevail.

There is little way to predict how the courts might decide on this. It’s a messy topic involving a complex set of existing and technical industry practices. Both sides have some valid concerns and good arguments to make to a court. Both sides also have access to the best lawyers and it will be an interesting court fight. But perhaps the most important thing to consider is that the existing rules can mean that it’s not economically feasible to build a new fiber network in a City – if so then something needs to change.

Are Cable Companies Winning the Speed War?

The latest news about Google Fiber slowing their metropolitan fiber builds got me to wondering if perhaps the cable companies are starting to win the speed wars. Are we getting to a time when a fiber overbuilder is going to have trouble competing with them?

After many years of being stingy with bandwidth, the cable companies have now largely adopted the opposite strategy of increasing household speeds over time without raising prices. I can remember quotes from several big cable companies a few years ago claiming they were giving households all the speed they needed. And that was back at a time when they were experiencing a significant amount of network congestion during the peak evening hours. But my reading of many different customer reviews tells me that the cable companies have largely solved the congestion issue.

This is not to say that every cable network is up to snuff, but compared to ten years ago a lot more cable networks seem to be delivering the speeds that customers want. There are still plenty of small towns where rural cable networks lag behind, but metropolitan areas seem to have improved a lot.

The FCC reported in their 2015 Measuring Broadband America Fixed Report that Comcast customers got between 109% and 119% of the speeds that they paid for. I know personally that my speed tests often show at least 5 Mbps better performance than what I am paying for with Comcast.

But the question that has been nagging me is whether a new fiber provider can really thrive in a metropolitan area. Can they get enough customers to be profitable? It’s been widely reported that Google and other fiber overbuilders need at least a 30% market share to succeed, and that’s a tall order in a city where everybody already has broadband.

People need a compelling reason to change providers, because it’s a process that nobody enjoys. It means staying at home to meet an installer, returning settop boxes and modems, and worrying about the billing transition.

I have some anecdotal evidence about the way at least one group of people buy broadband. I’ve been a member of several active Maryland sports message boards for over two decades and broadband is a periodic topic of conversation since sports fans these days care about watching sports on the Internet. The majority of the people on these boards happen to live in neighborhoods that have both Verizon FiOS and a cable company – mostly Comcast, but sometimes somebody else. These are folks who have had the choice between fiber and coaxial cable networks for a long time.

What I’ve seen over the years is that there are a few people who are big fans of either the cable company or Verizon. But the vast majority don’t seem to really care as long as the broadband works well enough to watch their sports and to handle the other things their families do on the Internet. Probably half of the people on these boards have moved back and forth between the providers during the last decade. I’ve seen evidence that content matters more than speeds: over the years there were occasions when one provider or the other did not broadcast a Maryland football or basketball game. At least among this one large group of consumers I don’t see any major affinity for fiber over coaxial cable networks. These folks just want something that works.

A new fiber provider has to provide a compelling reason for people to change. Certainly having lower prices could be a compelling reason, but most metropolitan fiber providers are not much cheaper than the cable company (and sometimes they are more expensive). And while a fiber provider might offer gigabit speeds, I wonder if that is enough to get people to change if they are happy with the speeds they have had for the last few years?

I’ve always said that there is some percentage of any community that will change to a new provider because they dislike the current provider for some reason. But those are rarely enough customers to justify a business plan, and so being successful with fiber also means persuading customers that are not unhappy to change. And perhaps, as Google has found out, that is not as easy as fiber proponents have assumed. Certainly, the cable company tactic of greatly improving the performance of their data products is making it harder and harder for a new overbuilder to thrive.

Technology Hype

I find it annoying when I read short articles proclaiming that a new technology that can deliver faster data speeds is right around the corner. This has most recently happened with 5G cellular, but in the past there have been spates of such articles talking about cable modem speeds with DOCSIS 3.1 and faster copper speeds with G.Fast.

It’s always easy to understand where such articles come from. Some vendor or large ISP will announce a technical breakthrough in a lab, and then soon thereafter there are numerous articles written by non-technical people proclaiming that we will soon be seeing blazing speeds at our homes or on our cell phones.

But these articles are usually premature, and sadly there are real-life consequences to this kind of lazy press. Politicians and policy makers see these articles and accept them as gospel and make decisions based upon these misleading articles. It then is up to people like me to come behind and explain to them why the public claims are not true.

This is happening right now with talk about blazingly fast millimeter wave radios to replace fiber loops. Even if this technology were ready for market tomorrow (which it won’t be), like any technology it will have limits. There are places where wireless loops might be a great solution but other places where it may never be financially or technically feasible. Yet a whole lot of the country now believes that our future broadband is dependent upon gigabit wireless, and this is quashing plans for building fiber networks.

One recent set of these kinds of articles proclaimed that DOCSIS 3.1 is going to bring everybody gigabit speeds over cable company networks. And there is some truth to that, but the nuances are never explained. There are a lot of changes needed in a cable network to bring gigabit speeds to all of their customers. What is really happening in the first upgrade is that cable networks will have limited gigabit capabilities. The companies will be able to deliver gigabit speeds to perhaps hundreds of people in a market. Their networks would have problems if they tried to deliver it to thousands, and their networks would crash if they tried to give fast speeds to everybody.

Consider the list of issues that must be overcome to use a cable network to bring gigabit speeds to the masses:

  • First a cable company has to free up enough empty channels to make room for the gigabit data channels. For many cable systems this will require upgrading the overall bandwidth of the cable network, and this can be very expensive. In the most extreme cases it can mean replacing all of the network amplifiers and power taps and even sometimes replacing some of the coaxial cable.
  • Cable bandwidth is shared by all of the customers in a neighborhood (called a node). If a cable company only sells a few gigabit products in a given node there will be some small degradation of bandwidth performance for everybody else. But if enough customers want to buy a gigabit the cable company will be forced to ‘split’ the nodes so that there are fewer homes sharing the bandwidth. Cable companies today have nodes of 200 – 300 customers, compared to fiber network nodes that generally range between 16 and 32 customers per node. A cable company has to build more fiber and install more electronics to get nodes as small as fiber systems.
  • Every network has chokepoints, or places where only a set amount of bandwidth can be handled at the same time. There are several of these chokepoints in a cable network – at the node, on the data pipe serving the node, at several data concentration points within the headend, and with the pipe to the outside Internet. You can’t upgrade speeds without upgrading these chokepoints, and that can be expensive.
  • At some point, if enough customers want fast speeds, the network would need to be fundamentally reconfigured to a new technology. This might mean converting the whole headend and electronics to IPTV. It might mean moving the CMTS (the device that manages the data traffic at each node) into the field, similar to a fiber network. And it would mean building a lot more fiber, to the point where there would be almost as much fiber as in a fiber-to-the-premises network.
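The node-splitting issue in the list above comes down to simple division. Here is a rough sketch of how the shared bandwidth per customer changes as nodes shrink — note that the 5 Gbps node capacity is an illustrative assumption, not a figure from any specific cable system:

```python
# Rough illustration of shared bandwidth on a cable node.
# The node capacity below is an assumption for the sake of the example.

def per_customer_share(node_capacity_mbps: float, customers: int) -> float:
    """Average downstream bandwidth per customer if everyone is active at once."""
    return node_capacity_mbps / customers

# Assume a node with roughly 5 Gbps of usable downstream capacity.
capacity = 5000.0  # Mbps

# Compare today's large cable nodes with fiber-sized nodes.
for node_size in (300, 200, 64, 32):
    share = per_customer_share(capacity, node_size)
    print(f"{node_size:3d} customers/node -> {share:7.1f} Mbps each at full load")
```

Even under these generous assumptions, a node shared by 200 – 300 homes cannot deliver anywhere near a gigabit to everyone at once, which is why heavy gigabit uptake forces the node splits described above.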

There is always some truth in these technological pronouncements. But these articles are way off base when they imply that a given breakthrough is the end-all solution to broadband. Yes, cable systems can be faster now, which is great. But DOCSIS 3.1 does not make a cable network equivalent to a Google Fiber network that can already deliver a gigabit to everybody. And yes, there is great promise in wireless local loops. But even after all of the issues with deploying wireless in a real-life environment are solved, the technology is only going to work where there is fiber fairly close to customers and when a number of other factors are just right. These kinds of nuances matter, and I really wish that non-techie writers would stop telling us that the solution to all of our broadband speed problems is right around the corner. Because it’s not.

Thoughts on the Google Fiber News

It was recently reported that Larry Page, the CEO of Alphabet, told Google Fiber to cut their staff in half from 1,000 to 500 and to also cut the cost of building new fiber. That certainly is going to slow or even stop Google Fiber’s expansion plans. There are a few lessons to be learned from that announcement for all fiber start-ups.

It’s expensive to build in cities. Building new fiber networks from scratch is expensive. It’s very doubtful that Google has found any magic that would let them build fiber networks for much less than anybody else. Verizon always said they stopped expanding FiOS due to the construction costs. Yet Verizon built most of their network in places where the construction was relatively inexpensive. They lashed fiber to existing telephone cables where there were poles or installed underground fiber where there was existing conduit. Google Fiber didn’t have either of those advantages and so they were spending a lot more than Verizon.

The company is now looking at wireless technologies to cut construction costs. Unless the company has some really good millimeter wave technology ready to roll out soon, this could cause a big delay in expansion. Lots of companies are thinking about wireless loops, but we know from past experience that there is a big transition with any wireless technology when moving from the lab into the real world.

You have to sell what customers want to buy. The news articles I saw say that Google Fiber had 200,000 customers at the end of 2014. One would expect that they have up to twice that by now. But considering their initial goal to have millions of customers those numbers are low.

My guess is that the company has had trouble convincing enough households to buy their $70 broadband. I would buy that in a flash, but I expect a lot of homes found the product to be out of reach for their budgets. I see that in Atlanta that Google Fiber has introduced a 100 Mbps broadband connection for $50, and that has to open up a lot more sales opportunities for them.

This highlights one of the biggest challenges for a fiber overbuilder. The biggest expense by far of getting into the business is to build the fiber network. Once you’ve sunk that money the goal is to get as many paying customers as possible onto the network, and that means having a wide enough array of products (which are still profitable) to generate revenue.

Delays are to be expected. There has been news over the last few years of delays in Google Fiber construction, much of it having to do with access to poles. Google Fiber learned the same lesson that has bedeviled many of my clients: when you need to get pole access from the company you will be competing with, they are going to use every legal trick in the book to slow down the process. I’ve written about pole horror stories before in this blog.

Labor costs can be your enemy. The announcement said that Google Fiber had to slash their work force from 1,000 to 500. They face the same dilemma as other broadband start-ups – there is a lot of work to do in launching new markets compared to the number of people needed to operate them once they are mature.

For many years I have used a general rule of thumb for the number of employees that a telecom company ought to have. For example, a medium sized carrier with 20,000 to 50,000 customers ought to have roughly 1 employee for every 350 customers. As companies get larger and more efficient that ratio stretches, and companies with up to about 250,000 customers ought to have around 1 employee for every 400 customers. Bigger companies should get even more efficient, but there seems to be a natural cap reached at some size by very large companies.

It’s impossible to judge if Google Fiber has too many employees without knowing how many customers they have today. I also have no idea if the company uses contractors in addition to full-time employees, because contractors would count in this calculation. If the company has 400,000 customers today and doesn’t use contractors then the 1,000 employee count wouldn’t be too bad. But if they have fewer customers and also use contractors they are currently overstaffed.
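Those rules of thumb make for a quick back-of-the-envelope staffing check. The sketch below applies the ratios from the text (1 employee per 350 customers for mid-sized carriers, 1 per 400 for larger ones); the exact tier boundary is my own simplification:

```python
def expected_employees(customers: int) -> float:
    """Back-of-the-envelope staffing estimate from the rules of thumb above.

    The tier boundary is a simplification: 1:350 for carriers up to roughly
    50,000 customers, 1:400 for larger companies.
    """
    ratio = 350 if customers <= 50_000 else 400
    return customers / ratio

# If Google Fiber had 400,000 customers, the rule of thumb suggests
# 400,000 / 400 = 1,000 employees - right at the reported headcount.
print(round(expected_employees(400_000)))
```

This is why the 1,000-employee figure is hard to judge: at 400,000 customers it is right on target, but at half that customer count it would be roughly double what the rule of thumb suggests.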

One issue that can justify a larger staff than normal is the need to have a staff that is working ahead on future markets. Those employees would not be counted when looking at the size of the needed staff.

Also, the ratios I cited are more than a decade old, and if I put some thought to them I would probably revise them higher. There have been numerous improvements in customer service tools and other efficiencies that reduce the number of times that a customer has to talk to or see a live employee. With the upcoming AI revolution one would imagine that a lot of customer service is going to be handled by bots, further lowering labor costs.

The bottom line of all of these issues is that Google Fiber hit the same wall hit by every other overbuilder. There comes a time when you have to show profitability or the money dries up. And it’s really hard to show profitability when you are still growing rapidly.

What’s the Right Price for a Gigabit?

I often get asked how to price gigabit service by clients that are rolling it out for the first time. For an ISP already in the broadband business, layering in a super-fast Internet product on top of an existing product line can be a real challenge.

Google certainly lowered the bar for the whole industry when they priced a gigabit at $70. And that is the real price since Google doesn’t charge extra for the modem. I think the Google announcement recalibrated the public’s expectations and anybody else that offers a gigabit product is going to be compared to that price.

There are a few other large companies marketing a gigabit product in multiple markets. CenturyLink has a gigabit connection for $79.95 per month. But it’s hard to know if that is really the price since it is bundled with CenturyLink’s Prism TV. The cheapest Prism TV product offered on the web costs $39.99 per month and includes 150 channels of programming and also comes with an additional settop box fee of $9.99 per month – the highest box fee I’ve seen. I don’t know exactly what kind of bundle discount is available, but on the web I’ve seen customers claiming that the cheapest price for the gigabit bundle is around $125 per month. That’s a far cry from Google’s straight $70. And for customers who want to use a gigabit to cut the cord, a forced bundle feels a bit like blackmail.

Verizon FiOS has not yet given in to the pressure to offer a gigabit product. In looking at their web site their fastest product is still a symmetrical 500 Mbps connection at $270 per month plus an added fee for a modem, and with a required 2-year commitment. A 1-year commitment is $280 per month.

Comcast will soon offer a gigabit in more markets than anybody else. In Atlanta, where Comcast is competing against Google Fiber, a gigabit is $70 per month with a 3-year contract, including an early termination fee (meaning that if you leave you pay for the remaining months). This package also requires an additional modem charge. Without a contract the price for the gigabit is $140. It’s unclear if Comcast is offering the same lower-price deal in other markets with newly upgraded DOCSIS 3.1, like Chicago. The word on the Internet is that customers are unable to sign up for the lower-price option in these markets, but the company says it’s available. I’m sure the availability will soon become clear.
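These apples-to-oranges offers are easier to compare if you total up the required monthly charges. The sketch below uses only the figures cited above; modem fees that aren’t quantified in the text are left out, so treat the totals as illustrative floors, not quotes:

```python
# Effective monthly price for a gigabit, using the figures cited above.
# Modem fees without a stated dollar amount are omitted, so some totals
# understate the real bill.

def monthly_total(*charges: float) -> float:
    """Sum the required monthly charges, rounded to cents."""
    return round(sum(charges), 2)

google = monthly_total(70.00)                    # no modem fee
centurylink = monthly_total(79.95, 39.99, 9.99)  # gigabit + cheapest Prism TV + settop box
comcast_contract = monthly_total(70.00)          # 3-year contract; modem fee extra
comcast_no_contract = monthly_total(140.00)      # modem fee extra

print(google, centurylink, comcast_contract, comcast_no_contract)
```

The CenturyLink total comes to $129.93 before any bundle discount, which squares with the roughly $125 bundle price that customers report after discounts.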

One thing that happens to any company that offers a gigabit is that the prices for slower speeds are slashed. If a gigabit is $70 – $80 then slower products must become correspondingly less expensive. Google offers a 100 Mbps product for $50 and each of the other companies listed above has a range of slower bandwidth products.

The first question I always ask an ISP is whether they are offering gigabit speed for the public relations value or because they really want to sell a lot of it. There are plenty of ISPs that have gone for the first option and have priced a gigabit north of $100 per month. But for somebody that hopes to sell the product, the dilemma is that they know that the majority of their customers will buy the least expensive product that provides a comfortable speed. The rule of thumb in the industry is that, in most markets, at least 80% of customers will buy the low or moderate priced options. But if the choice is between a gigabit product and a 100 Mbps product, the percentage buying the slower product is likely to be a lot higher.

The issue that small ISPs face when recalibrating their speeds is that they end up increasing speeds for most existing customers. If they migrate from a scale today where 50 Mbps or 100 Mbps is the fastest product up to a new scale topped by a gigabit, then they have to increase speeds across the board to accommodate the new gigabit product.

This is a hard mental block to get over for many small ISPs. If a company offers a range of products today from 6 Mbps to 75 Mbps, it’s mentally a challenge to reset their slowest speed to 50 Mbps or faster. They often tell me that in doing so it feels like they are giving away something for free. If a company has been an ISP since the dial-up days it often has a number of customers that have been grandfathered with slow but inexpensive broadband. It’s a real dilemma when rebalancing speeds and rates to know what to do with households that are happy with a very cheap 1 Mbps or 2 Mbps connection.

For the last ten years I have advised clients to raise speeds. ISPs that have raised speeds tell me that they generally only see a tiny bump in extra traffic volume after doing so. And I’ve always seen that customers appreciate getting faster speeds for the same price. Since it doesn’t cost much to raise speeds it’s one of the cheapest forms of marketing you can do, and it’s something positive that customers will remember.

I think most ISPs realize that the kick-up to gigabit speeds is going to be a change that lasts for a long time. There are not many customers in a residential market that need or can use gigabit speeds. What Google did was to leap many times over the natural evolution of speeds in the market, and I think this is what makes my clients uneasy. They were on a path to have a structure more like Verizon with a dozen products between slow and fast. But the market push for gigabit speeds has reduced the number of options they are able to offer.

Getting Access to Conduit

There is an interesting case at the California Public Utilities Commission where Webpass is fighting with AT&T over access to conduit. You may have seen that Webpass was just recently bought by Google Fiber and I would think this case will be carried forward by Google.

The right for competitive providers to get access to conduit comes from the Telecommunications Act of 1996. In that Act, Congress directed that competitive telecom providers must be provided access to poles, ducts, conduits, and rights-of-way by utilities. A utility is defined as any company, except for electric cooperatives and municipalities, which owns any of those facilities that are used in whole or in part for communications by wire. Under this definition telcos, cable companies, commercial electric companies, gas companies, and others are required by law to make spare conduit available to others.

If a utility allows even one pole or piece of conduit to be used for communications, including for its own internal purposes, then the whole system must be made available to competitors at fair prices and conditions. About half of the states have passed specific rules governing those conditions while states without specific rules revert to the FCC rules.

Webpass tried to get access to AT&T conduits in California and ran into a number of road blocks. It seems like there are a few situations where AT&T has provided conduit to Webpass, but AT&T denied the majority of the requests for access.

This is not unusual. Over the years I have had several clients try to get access to AT&T and Verizon conduit and none of them were successful. AT&T, Verizon, and the other large telcos generally have concocted internal policies that make it nearly impossible to get access to conduit. When a competitor faces that kind of intransigence their only alternative is to take the conduit owner to court or arbitration – and small carriers generally don’t have the resources for this kind of protracted legal fight.

But even fighting the telcos is no guarantee of success, because the FCC rules provide AT&T with several reasons to deny access. A utility can deny access on the basis of safety, reliability or operational concerns. So even when a conduit owner that has invoked one of these reasons is ordered to provide access, it can simply invoke one of the other exceptions and begin the whole fight again. It takes a determined competitor to fight through such a wall of denial.

Trying to get conduit reminds me of the battles many of my clients fought in trying to get access to dark fiber fifteen years ago. I remember that AT&T and Verizon kept changing the rules of the dark fiber request process so often that a competitor had a difficult time even formulating a valid request for dark fiber. Even when Commissions ordered the telcos to comply with dark fiber requests, the telcos usually found another reason to deny the requests.

This is a shame, because getting access to conduits might be one of the best ways possible to promote real competition. AT&T and Verizon both claim to have many hundreds of thousands of miles of fiber, much of it in conduit. I am sure there are many cases where older conduit is full. But newer conduits contain multiple empty tubes, and one would have to think that there is a huge inventory of empty conduit in the telco networks. The same is true for the cable companies and the large electric companies, and I can’t recall any small carrier that has ever gotten access to any of this conduit. I think some of the large carriers like Level3 or XO probably have gotten some access to conduit, but I would imagine even they had to fight very hard to get it.

I remember talking to a colleague the day that we first read the Telecommunications Act of 1996 that ordered the telcos to make conduit available to competitors. We understood immediately that the telcos would adopt a strategy of denying such access – and they have steadfastly said no to conduit requests over the years. I am glad to see Webpass renewing this old fight and it will be interesting to see if they can succeed where others have failed.