Can Big ISPs Resist Data Caps?

I think we can expect data caps to continue to be in the news. Comcast was getting a lot of negative press on data caps at the beginning of the year and had generated tens of thousands of complaints at the FCC over its 300 GB (gigabyte) monthly data cap. They relieved that pressure by unilaterally raising all of the data caps to 1 TB (terabyte) per month. Comcast has since been quietly implementing the terabyte cap across the country and recently activated it in the Chicago region.

In May of this year, AT&T U-verse revised a few of their data caps upward, but at the same time began seriously enforcing them for the first time. Until recently, most AT&T data customers who exceeded the caps paid no extra fees. The AT&T U-verse data caps are much smaller than the new Comcast cap. For traditional single-copper DSL customers the data cap is 150 GB per month. For U-verse speeds up to 6 Mbps the cap is now 300 GB per month. For speeds between 12 Mbps and 75 Mbps the cap is 600 GB, while customers with speeds at 100 Mbps or faster now have the same 1 TB monthly cap as Comcast. AT&T has a kicker, though, and any customer can buy unlimited usage for an additional $30 per month.

The large ISPs, in general, are under a lot of pressure to maintain earnings. They have all profited greatly by almost two decades of continuous rapid growth in broadband customers. But that growth is largely coming to an end. A few of the cable companies are still seeing significant broadband growth, but this is coming mostly from capturing the remaining customers from big telco DSL.

At the beginning of this year, the Leichtman Research Group reported that 81% of all American homes now have a broadband connection. When you add up rural homes that can’t get broadband and those elsewhere that can’t afford full-price broadband, there is not much room for more growth. Even if a lot of low-income households get broadband through the Lifeline Fund subsidies, those customers will be at low rates and won’t do a lot for the bottom line at the big ISPs.

Meanwhile, the large ISPs are seeing an erosion of cable revenues. While cord cutting is small, it is real, and the cable industry as a whole is now slowly losing customers. Probably more significant to their profits is cord-shaving, where customers cut back on their cable packages to save money (and because they have alternatives to the big cable packages). Even if cable weren’t starting to bleed customers, the margins continue to shrink due to the huge increases in programming costs. Even high-margin revenue streams like set-top boxes are under fire at the FCC.

When I look out five years from now it’s obvious that the ISPs will somehow have to milk more profit out of broadband. There are only two ways to do that – increase rates or find backdoor ways like data caps to get more money from broadband customers.

It’s not hard to understand why the large ISPs fought net neutrality so hard. By putting broadband under Title II regulation the ruling has already started to impact their bottom line. I think Comcast raised their data cap to stop the FCC from investigating data caps. The proposed FCC rules on privacy will largely strip the ISPs of the ever-growing revenues from advertising and big data sales. And it’s certainly possible in the future that the FCC could use the Title II rules to hold down residential data rates if they climb too high.

It’s got to be a bit hard to be a big ISP right now. They look with envy at the big revenues that others are making. The cellular companies are making a killing with their stingy data caps. Companies like Google and Facebook are making huge amounts of money by using customer data for personalized advertising. Meanwhile, the ISPs live in a world where, if they aren’t careful, they will eventually become nothing more than the big dumb pipe provider – the one future they fear the most.

Comcast, and perhaps the new Charter, are large enough to find other sources of revenue. Comcast is now pursuing a cellular product and has done fairly well selling security and smart home products. Comcast also makes a lot of money as a content provider, boosted now by buying DreamWorks. But any ISP smaller than these two companies is going to have a nearly impossible time if they want to continue to match the growth in bottom line they have enjoyed for the last decade.

Shrinking DSL Competition

For a number of years Verizon has been trying to get rid of DSL customers. Verizon just recently increased the price of its older DSL by $7 in an attempt to drive more customers to FiOS, Verizon wireless, or the cable company.

Unlike the other large telcos, Verizon never upgraded its DSL to the paired-copper technology used by AT&T U-verse. With that technology, AT&T and other telcos bond together two copper wires and use a later variety of DSL, which together can increase DSL speeds to as much as 50 Mbps on perfect copper and to 25 Mbps even on poor copper. Instead, Verizon put all of its investment into FiOS fiber and most of its DSL dates from the very early 2000s. The older DSL that is still operating has speeds of up to 3 Mbps, with ‘newer’ DSL reaching speeds of up to about 7 Mbps.

These are the speeds in urban areas and Verizon customers who live outside of towns get far slower speeds, often reported at near-dial-up slowness. And many of these rural customers have to worry about Verizon wanting to tear down their copper lines, leaving them with no wireline broadband alternative. Verizon is the only large telco that largely rejected the FCC’s offer of Connect America Funds to upgrade its rural DSL. Verizon has sold large chunks of its rural market to Frontier and the company has made it clear to the FCC that they would like to walk away from the rest.

If you’ve never read the customer reviews at DSL Reports it’s worth a look. This is a site where customers have been posting stories of problems with broadband for years – everything from lack of speed to poor customer service, slow repairs, and pricing. For anyone who happens to have a fast broadband connection it’s an eye-opener to hear from homes that do not.

The FCC tries to paint the picture that there are many markets in the US that have at least two competitors. But when one of the two competitors is a telco trying to edge its way out of the DSL business it’s hard to argue that a lot of the country hasn’t become a cable monopoly for broadband. The households that stubbornly stick with DSL seem to be those that are willing to accept slow speeds for a lower price. But Verizon seems to want these customers to move on to some other alternative.

Even where the telcos are trying to make DSL competitive it’s a losing battle. AT&T put a lot of money into upgrading and selling its U-verse DSL product. This was their alternative to building fiber and AT&T thought they could get a few more decades out of their aging copper.

But AT&T totally underestimated the huge increase in household demand for bandwidth. The U-verse product uses paired DSL – with speeds generally between 25 Mbps and 40 Mbps – to serve both cable TV and broadband. AT&T quickly found out that this data pipe is too small for homes that want to watch multiple TVs or that today want to watch multiple Netflix streams. AT&T is remedying this by working feverishly to shift TV over to their new DirecTV platform, freeing up the full amount of U-verse bandwidth for Internet access.

We are not too many years away from the time when the myth that most urban markets have at least two broadband competitors will fade away. As household demand for broadband keeps growing there will be fewer and fewer people on DSL, and that exodus will be accelerated by companies like Verizon helping to push DSL customers out the door.

Verizon passes many millions of homes with its fiber-based FiOS. But that network has always been very patchwork in that it will serve one neighborhood while bypassing nearby neighborhoods. There is a slight glimmer in the FiOS story since Verizon recently announced that they are going to greatly expand FiOS in downtown Boston. Boston is like most east coast cities where Verizon built a lot more fiber in the surrounding suburbs while largely ignoring the more costly construction in the city. But after not having built much new FiOS for a while the company surprised everybody by announcing this new fiber initiative.

The bottom line is that DSL is in its death throes. But like dial-up (which is still sold to millions of homes) there is likely to be DSL around for as long as the telcos don’t physically tear down the copper or pull the plug on the electronics. But it’s clear that Verizon, at least, is hoping for DSL to fade away sooner rather than later.

Speed Matters

Park Associates just published the results of a survey that looks at why consumers switch broadband providers. The survey showed that 9% of households changed broadband providers last year. The company surveyed households that had changed and categorized their responses into seven categories.

It turns out that the number one reason that people changed providers was to get faster speeds; 35% of households listed the need for faster speeds as their primary motivation.

Of course, there are still households that care about price. 18% of households that changed broadband providers did so because they could buy comparable speeds at a lower price. But almost nobody changed providers to accept a slower speed, even with a savings.

The survey results are backed up by real-world statistics. In most markets in the US today there is still duopoly competition between the cable company and the phone company, with the cable company generally having faster speeds. There has been a steady exodus for years from phone company DSL to cable modems and in 2015 alone the cable companies added 3 million new customers, while DSL continued to decline.

There is a lesson to be learned from these statistics. While the news is full of talk of gigabit fiber networks, not all fiber networks offer blazingly fast speeds. I know of a number of owners of fiber networks that offer speeds that are not much faster than the cable modem products they compete against. Those networks are not capitalizing on their technological advantage.

One thing that most of my clients have learned over the years is that increasing customer speeds doesn’t cost them very much. I’ve followed up on hundreds of network speed increases and almost universally ISPs report that customers use the Internet the same after a speed increase as before – but customers always say they love the faster speeds. And so, to the extent that faster speeds don’t cost much to implement, a fiber owner ought to always have speeds faster than their cable competitors – why would you not?

One issue that continues to confound customers is the difference between advertised speeds and actual speeds. I have one client whose basic product on fiber is 30 Mbps and they deliver that speed very solidly all of the time. They are competing against a cable modem product advertised as ‘up to 60 Mbps’. And yet, in that market, the fiber product is demonstrably faster than the cable modem product. But this advertising discrepancy creates confusion in the minds of consumers.

There might be some help coming in this area since the FCC will soon be requiring the large broadband providers to disclose more information to customers about their broadband products. But I guess we’ll have to wait to see how truthful they really become.

My company conducts surveys and one thing we’ve found is that only a small percentage of consumers actually know the speed they are supposed to be getting or the speed they are actually getting. But what they do understand is when their speed is not fast enough to do what they are trying to do.

We know that overall that the amount of data used by the average household has been doubling about every three years. What that means is that people will buy a data product and within a relatively short number of years they will start bumping against that speed and realize they need something faster.
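The arithmetic behind that doubling rule of thumb is easy to sketch. Here is a minimal illustration, assuming a hypothetical household that needs 10 Mbps today (the starting figure is just an example, not a measured value):

```python
def projected_demand(start_mbps: float, years: float, doubling_years: float = 3.0) -> float:
    """Project bandwidth demand assuming it doubles every `doubling_years` years."""
    return start_mbps * 2 ** (years / doubling_years)

# A household comfortable on 10 Mbps today:
for year in (0, 3, 6, 9):
    print(year, projected_demand(10, year))   # 10 -> 20 -> 40 -> 80 Mbps
```

Nine years out, the same household needs eight times the speed it started with – which is why a product that feels fast today starts to feel cramped within a relatively short number of years.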

I think the cable companies understand this issue. Comcast has upped speeds across the board for data customers at least twice this decade that I can recall. Increasing speeds periodically stops customers from hitting their speed ceiling and keeps them happy with the product they have. If you are operating a network that can provide faster speeds you should be increasing speeds from time to time also. You don’t want many of your customers to be in the 9% looking for a new broadband provider.

Thinking Exponentially

We are at an interesting point in human history where there is rapid growth in a number of different areas that are all having or will soon have a profound impact on society. And by rapid growth I am talking about exponential growth, because most people assume that even fast growth is straight-line and linear.

Most things around us grow over time with linear growth, which is growth done at a consistent rate. But exponential growth happens with a repeated multiplication of the rate of growth. Linear growth results in straight-line growth while exponential growth results in explosive growth.

An example of exponential growth is the old Chinese story about a man who did a favor for an emperor and asked to be paid in rice. He wanted one grain the first day, two grains the second day, and so on for a month. The emperor thought this sounded like a great idea until a few weeks into the process it became clear that he would soon be paying with all of the rice in China.
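The rice story is easy to verify with a few lines of code – the numbers stay harmless-looking for the first couple of weeks and then explode:

```python
# One grain on day 1, doubling every day for a 30-day month.
grains_per_day = [2 ** (day - 1) for day in range(1, 31)]

print(grains_per_day[13])     # day 14: 8,192 grains -- still a handful
print(grains_per_day[29])     # day 30: 536,870,912 grains
print(sum(grains_per_day))    # total owed: 1,073,741,823 grains
```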

We’ve had a few examples of exponential growth in the US economy in the past. Consider the growth of televisions in households. These went from being in very few homes in the late 1940s to practically every home in the country by the mid-1950s.

We have one example of exponential growth in the broadband industry: the amount of data downloaded by the average home has been doubling roughly every three years since the late 1980s. And we’ve seen the result of this growth in the quickness with which any new broadband technology gets overwhelmed and obsolete within a relatively short time after hitting the market. Consider DSL. When we all got our first 1 Mbps DSL connection it felt extravagantly fast. I remember talking about how wonderful it felt to have a T1 in my house. But that excitement faded quickly when within a few short years that DSL felt inadequate.

The human mind does not easily grasp the idea of exponential growth. I’ve seen this many times with network planning. Engineers will plot out expected network growth linearly and will increase the size of the data electronics on a network only to find out, often within a very short time, that the new facilities are full and overloaded. Exponential growth almost always surprises us.

We are now sitting at a time when there are a number of examples of exponential growth happening in different technology areas. Ray Kurzweil was one of the first to identify the impact of exponential growth in today’s world, back in 2005 in his book The Singularity Is Near. In that book he discussed five paradigms in the computing world that had grown exponentially in the 20th century: electromechanical, relay, vacuum tubes, discrete transistors, and integrated circuits.

Kurzweil has made very good predictions about the last decade and has made the following predictions about the next few decades:

  • Within a decade from now, solar power will generate the majority of the world’s electricity.
  • By the late 2010s, glasses will beam images directly onto the retina, and ten terabytes of computing power (roughly the same as the human brain) will cost about $1,000.
  • By the 2020s, most diseases will have been cured by nanobots in our bloodstream, computers will easily pass the Turing test, and self-driving cars will be the norm, with people no longer allowed to drive on highways.
  • By the 2030s, virtual reality will begin to feel 100% real, and we will be able to upload our mind/consciousness by the end of the decade.
  • By the 2040s, computers will be a billion times more capable than biological intelligence, and nanotech will enable us to make food out of thin air.
  • By 2045, we will be able to multiply our intelligence a billionfold by linking our brains wirelessly to the cloud.

These predictions are all amazing and speak about a near-future world that is very different than today. But what they speak about even more is the power of exponential growth. In order for these predictions to be realized there needs to be continual exponential growth in the fields of computing, artificial intelligence, biological sciences, etc.

Our Degrading Networks

Lately I’ve been hearing a lot of stories about rural broadband with a common theme. People say that their broadband has been okay for years and is now suddenly terrible. This seems to be happening more on DSL networks than with other technologies, but you hear this about rural cable networks as well.

There are several issues which contribute to the problem – more customers sharing a local network, increasing data usage for the average customer, and a data backbone feeding the neighborhood that has grown too small for the current usage.

Broadband adoption rates have continued to grow as more and more households find it mandatory to use broadband. And so neighborhoods that once had 50% of homes using a local network will have grown to more than 70%. That alone can stress a local network.

Household broadband usage has also been increasing. A lot of the new usage is streaming video. This video doesn’t just come from Netflix; there is now video all over the web and social media. It’s hard to go to the web today and not encounter video. As more and more customers watch video at the same time, they can quickly demand more aggregate data than the network can supply. Where demand has outstripped network capability there is a remedy available for most situations: increasing the size of the bandwidth pipe feeding a neighborhood will typically fix the problem.

Let’s look at an example. Consider a neighborhood that has 100 DSL customers and that is fed by a DS3 (45 Mbps). In the days before a lot of streaming video such a neighborhood probably felt like it had good broadband. The odds were against more than a few customers trying to download something really large at exactly the same time, which meant that there was almost always enough bandwidth for everybody.

But today people want to watch streaming video. Netflix recommends that there be at least a 1.5 Mbps continuous stream available to watch a video. So up to about 30 households in this theoretical neighborhood could watch Netflix at the same time. The math is not quite that linear, as I will explain below, but you can see how it works. The problem is that it’s not hard to imagine that with 100 homes there would be demand for more than 30 video streams at the same time, particularly when considering that some households want to watch more than one Netflix stream at the same time.

The problems in this theoretical neighborhood are made worse by what is called packet loss. Packet loss occurs when a congested network is handed more traffic than it can carry at the same time. When that happens some packets are accepted, but some are just lost. Our current web protocols correct this problem by sending a message from the receiving router asking for the retransmission of missing packets, and they are sent again. As networks get busy the amount of contention and packet loss increases, and the percentage of packets that are sent multiple times increases. And so as networks get busy they grow increasingly less efficient. Where this neighborhood network can theoretically accommodate 30 Netflix streams, in real life it might actually only handle 20 due to the extra traffic caused by resending lost packets.
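The back-of-the-envelope math for this example neighborhood can be sketched as follows. The 30% retransmission overhead is an assumed figure chosen to roughly match the numbers above, not a measured one:

```python
BACKBONE_MBPS = 45.0    # the DS3 feeding the neighborhood
STREAM_MBPS = 1.5       # Netflix's minimum recommended stream

# In theory the pipe carries 30 simultaneous streams.
ideal_streams = int(BACKBONE_MBPS / STREAM_MBPS)
print(ideal_streams)    # 30

# Assume ~30% of capacity is consumed by retransmitted packets when busy.
RETRANSMIT_OVERHEAD = 0.30
effective_mbps = BACKBONE_MBPS * (1 - RETRANSMIT_OVERHEAD)
busy_streams = int(effective_mbps / STREAM_MBPS)
print(busy_streams)     # roughly 20 streams once the network is congested
```

The exact overhead varies with how congested the network gets, but the shape of the problem is the same: a pipe sized for 30 streams serves meaningfully fewer once retransmissions pile up.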

This theoretical network has grown over time from being efficient to now being totally inadequate. Customers who were once happy with speeds are now unable to watch Netflix on an average evening. The network will still function great at 4:00 AM when nobody is trying to use it, but during the times when people want to use it, it will fail more often than not. The only way to fix this theoretical neighborhood is to increase the backbone from 45 Mbps to something much larger. And that requires capital – and we all know that the large telcos are not putting capital into copper neighborhoods.

Cellular companies have been dealing with these growth issues for a number of years now. Cellular networks are seeing annual growth between 60% and 120% per year, meaning that any improvement in the network is quickly eaten up by increased demand. But it’s a much bigger issue to keep upgrading all landline networks. While there are just over 200,000 cell towers in the US there must be several million local broadband backbone connections into neighborhoods. These range from tiny backbones with a few T1s feeding a few homes up to networks with a few hundred people sharing a larger backbone. Upgrading that many networks’ backbone connections means a huge capital outlay is needed to maintain acceptable levels of service.

Unfortunately my theoretical neighborhood is not really all that theoretical. The big increase in landline broadband demand is now starting to max out the bandwidth utilization in many neighborhoods. The FCC says that there are 34 million people in the country that don’t have adequate broadband today. But with the rate that neighborhood networks are degrading, the number of households with inadequate broadband is growing rapidly – not getting smaller as the FCC is hoping.

Two Tales of DSL

I had to chuckle the other day when I saw two articles about DSL that were going in opposite directions. In the first, AT&T announced that they are phasing the TV product out of U-verse. The same day I saw an announcement from Frontier that they are entering the video-over-DSL business in a big way.

The technology that is being used in both cases is paired DSL. This means putting DSL onto two copper phone lines and then using them together to create one data path. Under ideal conditions, meaning perfect copper, the technology can deliver about 40 Mbps through about 7,000 feet of copper. But of course, there is very little perfect copper in the real world and so actual speeds are typically somewhat slower than that.

In AT&T’s case this change makes sense. They purchased DirecTV and they are going to use the satellite platform to deliver the cable TV signal. This will free up the DSL pipe to be used strictly for data and VoIP, and this will extend the competitive ability of the DSL technology. In most cases the company can deliver 20 Mbps – 40 Mbps to homes that are close enough to a DSLAM. I’m sure that AT&T has been finding it increasingly difficult to deliver data and cable together on one DSL pipe.

The downside for AT&T is that not everybody can get DirecTV. Some people live where they can’t see the satellite and many people in apartments aren’t allowed to stick up a dish. So this isn’t a perfect solution for AT&T, but the increased data speeds probably mean a bigger potential customer base for the U-verse product.

Frontier is coming at this from a different direction. The company has seen declines in revenue as voice customers continue to drop off the network and as they continue to lose DSL customers to cable companies. The company saw a 1% decline in revenue just in the fourth quarter of 2015.

To try to generate new sales the company just announced this week that they are entering the business that AT&T is abandoning. The company launched IPTV in the 4th quarter of last year and announced that they are going to extend this to 40 other markets and pass 3 million customers with the product. They are going to use the same paired DSL as AT&T U-verse and will offer video on the DSL.

Frontier is hoping that this move, which will give them the triple-play bundle, will bring in more broadband customers and bolster both revenues and the bottom line. The company also expects to get a nice bump from finally closing on their purchase of Verizon properties in Texas, Florida and California. It is going to be a busy year for the company as they also hope to add 100,000 new broadband customers this year for the first of six years of an expansion funded by the CAF II funds from the FCC.

I have a lot of sympathy for a company like Frontier. They have purchased a lot of rural markets that have been neglected for years by Verizon and which don’t have very good copper. Where many smaller telcos are converting all of their rural areas to fiber, Frontier does not have access to the capital needed to do that, nor would they want to suffer through the earnings hit that comes from spending huge amounts on capital.

But the problem for all DSL providers is that within a few years the demand for broadband speed is going to exceed their capabilities. The statistic that I always like to quote is that household demand for broadband speeds doubles about every three years. This has happened since the earliest days of dial-up. One doesn’t have to chart out too many years into the future before the speeds that can be delivered on DSL are not going to satisfy anybody. The CAF II money is only requiring DSL that will be at least 10 Mbps download, which is already inadequate today for most families. But even the 20 – 40 Mbps paired-DSL is going to feel very slow when cable companies have upgraded to minimum speeds of 100 Mbps or faster. And if that DSL is also carrying video along with the data it’s going to feel really slow. I would not want to be one of the companies still trying to make copper work for broadband a decade from now.

The Widening Rural Broadband Gap

The gap between urban and rural broadband is widening quickly these days. Up until the late 1990s access to the Internet was the same for everybody using dial-up. But within a short period of time in the late 90s both DSL and then cable modems hit the market.

I remember back in the early 90s how jealous I was of friends who had Internet access at work using a T1. But then DSL became available and all of a sudden we could all get the equivalent of T1 access at our homes. At the time DSL felt amazingly fast, and it was at 20 – 30 times the speed of dial-up. The big limitation with dial-up was that it took several minutes to see a picture that accompanied a news story and it could take hours to download a software update. But DSL and cable modems fixed those problems and images became much faster and file downloads didn’t take half of the night.

But these new technologies were only available in towns and cities and that was the start of the urban / rural broadband gap. Over the years both technologies got faster. In most big cities it became routine to be able to buy DSL at speeds up to 15 Mbps, a nice improvement over the first generation. But cable modems improved even more and over the last decade became capable of speeds much faster than DSL.

What I found odd was that for the longest time the cable companies didn’t take advantage of their extra capabilities. They offered cable modem speeds that were just slightly faster than DSL. I can remember the CEO of Comcast telling people that they would supply the speed that people ‘needed’. But even at 15 Mbps the speeds were 250 times faster than the dial-up that many rural people were still stuck with.

But over the last five years the cable companies woke up and started unilaterally raising speeds to be faster than DSL, and in doing so they started capturing the vast majority of the market. It was hard to justify staying on 6 Mbps DSL if you could get a 25 Mbps cable modem for the same price. The cable companies have generally offered speeds over the last five years up to 100 Mbps, although the vast majority of urban customers have opted for something slower. But even 25 Mbps is 450 times faster than dial-up.

Not all rural people have had dial-up as their only option. There have been several satellite companies that offered faster speeds, but the service was really expensive and there was so much latency in the signal that a lot of things other people could do on the Internet are not possible on a satellite connection. So a lot of rural people still use dial-up – or often just go without a connection – because on today’s web, dial-up can do little more than read emails.

In the last year or so the cable companies have really kicked it up a notch and they clearly now are competing with speed – probably as a way to fight off having somebody else build fiber. Late last year Comcast doubled the speed on my home connection from 50 Mbps to 100 Mbps, an eye-opening 1,800 times faster than dial-up.

And the cable companies aren’t finished. They are now talking about upgrading to DOCSIS 3.1 which will enable them to offer speeds up to a gigabit. But that is not the real news concerning the new technology, because Comcast says that they plan to increase speeds across the board again. So my 100 Mbps connection might become 150 Mbps to 200 Mbps. Or 3,500 times the speed of dial-up.

But there are cities that are really lucky and have widespread gigabit speeds. Google and a few others are using fiber to provide a real competitor to cable modems. And customers on a gigabit connection get speeds nearly 18,000 times faster than dial-up.

So rural folks with no broadband alternative have seen the people in the towns and cities around them climb over time from 20 times faster up to many thousands of times faster. I really don’t think most urban people understand how colossally terrible it is to be on dial-up. They remember all of the things that they could do on dial-up in the 90s and they don’t stop to think how the whole web has migrated to video. Imagine trying to look at Facebook or Pinterest or any other popular site on dial-up, or even on 1 Mbps rural DSL and you can quickly understand why rural areas are getting desperate and are willing to do almost anything to get faster broadband.
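The speed multiples quoted through this section all come from dividing each technology’s speed by a 56 kbps dial-up connection. A quick sketch (the figures in the text are rounded versions of these):

```python
DIALUP_MBPS = 0.056   # a 56 kbps dial-up modem

tiers = {
    "early 1.5 Mbps DSL": 1.5,
    "15 Mbps DSL": 15,
    "25 Mbps cable modem": 25,
    "100 Mbps cable modem": 100,
    "gigabit fiber": 1000,
}

for name, mbps in tiers.items():
    # ~27x, ~268x, ~446x, ~1,786x, ~17,857x dial-up respectively
    print(f"{name}: ~{round(mbps / DIALUP_MBPS):,}x dial-up")
```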

How Many Homes Can’t Get Broadband?

The FCC periodically puts out some very high-level statistics that talk about the state of broadband in the US. They issued their annual broadband report in January and made the following high-level announcements:

  • 39% of rural households don’t have access to broadband that meets the FCC definition of 25 Mbps download and 3 Mbps upload.
  • 4% of urban households don’t have access to those speeds.
  • 41% of schools still do not have 100 Mbps download speeds.
  • Only 9% of schools have 1 Gbps broadband.

I looked deeper into how the FCC counts these various numbers to try to make some sense of them, and the following is what I discovered.

First, they followed the Census definition of urban and rural areas. The Census defines urban areas in one of two ways. One definition of an urban area is a defined geography with more than 50,000 people. It can also be a cluster of smaller towns in a fairly adjacent geographical area that has more than 2,500 people but less than 50,000. In the Census estimate for 2015 the urban areas include about 260 million people. Anything that is not urban is rural, which in 2015 is about 61.5 million people.

If a rural county has a county seat with more than 50,000 people, the county seat would be counted as urban and the rest of the county would be rural. Otherwise the whole county is normally counted as rural. But in big urban areas, like the northeast corridor, many areas that you would consider rural are included in the urban areas. So there is a significant amount of crossover at the edges of these two types of areas. For instance, for broadband purposes we know that somebody who lives 50 feet past where the cable company stops at a county seat might not be able to get broadband, but would often still be counted as urban.

The raw data that backs up these statistics is still self-reported to the FCC by the ISPs annually on Form 477. On this form telcos and cable companies must report the speeds that they deliver to census blocks, which are census-defined areas of 500 to 900 homes. I looked through this mass of data and there are a huge number of census blocks that are reported at broadband speeds like 3 Mbps or 6 Mbps download. In most cases this is DSL, and our experience is that a whole lot of people in rural DSL areas can’t really get those speeds. That is the ‘advertised’ speed or the theoretical speed. This has always been an issue and I’ve always contended that there are far more homes that can’t get broadband than are reported by Form 477.

Using these FCC numbers means that there are about 24 million people (or 10 million homes) in the rural areas that can’t get the FCC’s defined broadband speeds. While the 4% of urban households that can’t get fast broadband sounds small, it still equates to 10.4 million people, or 4.3 million homes. So what the FCC numbers are really saying is that there are over 34 million people and about 14.3 million homes in the country that can’t get an FCC-defined broadband connection.
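For anyone who wants to check this math, here is a quick sketch of the arithmetic using the figures cited above. The persons-per-home divisor of roughly 2.4 is an assumption inferred from the article’s own people-to-homes conversions, so small rounding differences from the totals in the text are expected.

```python
# Back-of-the-envelope check of the FCC broadband numbers cited above.
# Population figures come from the 2015 Census estimates quoted in the text;
# PERSONS_PER_HOME (~2.4) is an assumption inferred from the article's
# own people-to-homes conversions, not an official figure.
RURAL_POP = 61.5e6      # 2015 Census rural population
URBAN_POP = 260e6       # 2015 Census urban population
PERSONS_PER_HOME = 2.4  # assumed average household size

rural_without = 0.39 * RURAL_POP   # 39% of rural households lack 25/3 Mbps
urban_without = 0.04 * URBAN_POP   # 4% of urban households lack 25/3 Mbps

total_people = rural_without + urban_without
total_homes = total_people / PERSONS_PER_HOME

print(f"rural without broadband: {rural_without / 1e6:.1f}M people")
print(f"urban without broadband: {urban_without / 1e6:.1f}M people")
print(f"total: {total_people / 1e6:.1f}M people, {total_homes / 1e6:.1f}M homes")
```

Running this gives roughly 24.0 million rural people and 10.4 million urban people, or about 34.4 million people in total, which is why the percentages in the FCC press release understate how big the problem feels when expressed in raw counts.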

I am positive that this number is conservatively low. Census blocks are not assigned by nice political boundaries and there are huge numbers of census blocks that cover both towns and country areas. There have to be many homes in census blocks where some of the people can get the speeds shown on Form 477 while others can’t. My guess is that there must be additional millions of people that supposedly can get broadband but really can’t. Even in towns, anybody who lives right past where the cable TV network stops is not going to get much broadband.

The FCC says that they are solving part of the rural broadband problem with CAF II funding which is supposed to bring faster connections to 3.6 million of these homes. But those funds only require upgrades to technology that will achieve 10 Mbps download and 1 Mbps upload. That program is not going to remove any homes from the list of those that can’t get broadband.

I really hate to see public announcements that talk in nationwide percentages instead of numbers. This always makes it feel like they are trying to pull something over on us. I had to dig really hard to get even one level behind the one-page press release – and a press release doesn’t really help the public to understand the situation. Much more useful would have been detailed tables by geographic area that let people see the state of broadband where they live. I suspect they don’t do that because then many of the problems with carrier self-reporting would be more obvious.

Are We Really Funding More DSL?

Recently while speaking at the National Association of Regulatory Utility Commissioners (NARUC), AT&T CEO Randall Stephenson told the attendees that AT&T’s DSL technology is obsolete. This is a rare admission of the truth from AT&T, which has been less than forthcoming over the years about its broadband business.

And it’s a pretty interesting quote from a company that last year accepted $427 million in CAF II funding from the FCC to expand broadband in rural markets. That money is supposedly going to be used to upgrade rural customers to be able to receive at least 10 Mbps download and 1 Mbps upload speeds. CenturyLink and Frontier plan to spend their federal assistance money by expanding DSL. I think it’s widely assumed that AT&T will also use the money for DSL. But we can’t be certain that they aren’t planning to instead use that money to bring cellular wireless to rural homes, against the intentions of the FCC.

To be fair to Stephenson, his response was answering a question about how regulators should look at new technology cycles. Stephenson pointed out that technology cycles have shortened over the years. When DSL was first introduced it was expected to be good for about 10 – 15 years, but today the cycles for new technology have shortened to 5 years – with his example being the transition between 3G and 4G wireless.

Stephenson is right about the speed at which broadband technologies are improving. Since the introduction of DSL we have seen cable modems go through several generations of improvements and in 2016 we are seeing the first widespread roll-out of DOCSIS 3.1 and gigabit speeds from cable companies. And in that same time frame we have seen the development and the maturation of fiber technologies for serving homes. From a performance perspective DSL has been left in the dust.

AT&T certainly still has a lot of DSL in service. But it’s hard to decipher AT&T’s broadband statistics because they lump all broadband customers together. This has gotten more confusing since they picked up DirecTV, which sells satellite broadband. AT&T has been further making a distinction between traditional DSL customers and U-verse customers, most of which are served by bonding two pairs of copper together and using two DSL circuits. But supposedly within the U-verse numbers are also customers on fiber, which many analysts suspect are MDUs or small greenfield fiber trials that AT&T has done over the years.

In the fourth quarter of 2015 AT&T announced a net gain of 192,000 IP broadband customers, which is a mix of the three different types of broadband customers. If AT&T is like Verizon and CenturyLink they have been losing traditional DSL customers at a torrid pace, so it’s hard to know what to make of that number. Are they finally adding some FTTP customers?

But back to DSL. Stephenson is right. At best, a DSL service on a single copper line can deliver perhaps 20 Mbps of data – but conditions are rarely ideal and in the real world DSL is generally a lot slower than that. But even if people could get 20 Mbps from new DSL it’s obsolete, because that speed is no longer considered broadband.

It’s a shame that the FCC is going to invest billions in DSL at a time when the large telcos were never going to make those investments on their own. The CAF II funds will channel billions of dollars to the DSL vendors for one last hurrah before the technology hits the dust heap. Without the CAF II money one can imagine the DSL equipment market fading away.

While CAF II is a huge gift to the companies that sell DSL equipment – it’s going to be a long-term curse to people that will be upgraded with CAF II funding. They are going to get upgraded to DSL in a fiber world and the telcos are going to check these areas off as upgraded and needing no more investment. A lot of the first DSL built in the 90s is still working in the network, and sadly we are probably going to find a lot of CAF II DSL still working in rural America twenty years from now.

Big Government and Broadband

One of the platforms of Hillary Clinton’s campaign is to create a 5-year $275 billion infrastructure plan that would, among other things, foster faster broadband for rural America. The plan would also pay for crumbling roads and bridges and other infrastructure. I’ve seen estimates that as a country we have a several trillion dollar infrastructure deficit, and so this plan would be the proverbial drop in the bucket towards bringing our infrastructure back to where it needs to be. But it’s a start and is better than doing nothing.

This plan leads me to speculate on the role that big government might be able to play in solving our broadband needs. What might the US government do with billions of dollars aimed at improving broadband?

We’ve seen two previous big federal broadband programs and the results have not been very good. First was the billions that were part of the broadband stimulus package. This money was used mostly to create middle mile fiber – that is fiber that stretches between communities. Some of that fiber has been used to get better broadband to the last mile, but the vast majority of that investment has not benefitted a whole lot of people other than the cellular companies who use that fiber to get cheaper access to cell towers.

The stimulus money also put a lot of emphasis on getting fiber to ‘anchor institutions’ which it defined as schools, libraries, city halls, and other government institutions. So we ended up with rural fiber networks that serve only a handful of these anchor institutions, but not to the neighborhoods surrounding these locations. As I’ve written many times, bringing fiber only to anchor institutions is actually a disincentive to get fiber everywhere because it removes these large bandwidth customers from being potential customers of locally built fiber networks.

To give the federal government a little credit, the stimulus money popped onto the scene with no notice and there was no plan in place or even people in place to review the various grant proposals. There were some last mile networks financed from the stimulus money and I’m sure those communities are thrilled to have been the lucky few that benefitted from the many billions in spending.

More recently we have seen the FCC throw billions of dollars at the large telcos with the CAF II funding. They have given Frontier, AT&T, and CenturyLink billions of dollars to improve rural DSL broadband to 10 Mbps. And gave them six years to get it done. This is such a bad idea on so many levels that you’ll have to go and read my other rants on this. But this is mostly the equivalent of pouring money onto the ground, and it’s going to bring no real broadband to anybody. This is a classic case of a government boondoggle that spends a lot of money and accomplishes almost nothing useful.

So what might the feds do if they were to give out more billions? One thing they will probably do is to overspend on broadband like was done with the stimulus money. Those grants included rules that inflated the cost of building fiber. The companies taking the money had to do expensive environmental and historical studies, something that makes no sense for fiber that is placed into pre-existing road rights-of-way. And they required the contractors building the networks to pay prevailing wages, which mostly meant paying large-city wages for projects that could have normally been done in rural areas for a lot less. Altogether these extra requirements probably added 15% – 20% to the cost of the projects.

What is scary is that in order to shovel the money out the door quickly the federal government might either give the money to the incumbents as corporate welfare or else end up backing projects like more middle mile that largely build fiber to nowhere.

The most cost-effective way to use federal money would be to give it to local groups in some sort of matching arrangement. This would stretch the federal money the farthest and would also enable communities to find the best local broadband solution. Some communities might tackle this directly using bond money for the match, while many others would seek out public/private partnerships with local carriers. And the small telcos and coops around the country could use this money to extend their fiber networks – many of them have already shown us how to bring fiber to remote places.

I have no idea if there will even be another big pile of federal money aimed at broadband – it’s a long way from a campaign platform to reality. But if this does happen I hope that this time they have a better plan that would use the money to build last mile fiber to rural communities – the only permanent solution to closing the rural broadband gap. I hope they take the time to listen to the industry and this time that they do it right – or at least better.