$20.4 Billion in Broadband Funding?

Chairman Ajit Pai and the White House announced a new rural broadband initiative that will provide $20.4 billion over ten years to expand and upgrade rural broadband. The announcement contained only a few details, and even some of those sound tentative. A few things are probably solid:

  • The money would be used to provide broadband in the price-cap service areas – these are the areas served by the giant telcos.
  • The FCC is leaning towards a reverse auction.
  • The funding will support projects that deliver at least 25/3 Mbps broadband.
  • The program will be funded from the Universal Service Fund and will ‘repurpose’ existing funds.
  • The announcement alludes to awarding the money later this year, which would be incredibly aggressive.
  • This was announced in conjunction with the auction of millimeter wave spectrum – however this is not funded from the proceeds of that auction.

What might it mean to repurpose this from the Universal Service Fund? The fund disbursed $8.7 billion in 2018. We know of two major upcoming changes to the USF disbursements. First, the new Mobility Fund II to bring rural 4G service adds $453 million per year to the USF. Second, the original CAF II program, which pays $1.544 billion annually to the big telcos, ends after 2020.

The FCC recently increased the cap on the USF to $11.4 billion. Everybody was scratching their heads over that cap since it is so much higher than current spending. But now the number makes sense. If the FCC were to award $2.04 billion in 2020 for the new broadband spending, the fund would expand almost to that new cap. Then, in 2021 the fund would come back down to $9.6 billion after the end of CAF II. We also know that the Lifeline support subsidies have been shrinking every year and the FCC has been eyeing further cuts in that program. We might well end up with a fund by 2021 that isn’t much larger than the fund in 2018.
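For anybody who wants to follow that arithmetic, here is a quick sketch using only the figures quoted above. It’s a back-of-the-envelope check, not official FCC data.

```python
# Rough arithmetic behind the USF cap discussion above, using only the figures
# quoted in this post (not official FCC data).

usf_2018 = 8.7            # $B disbursed by the USF in 2018
mobility_fund_ii = 0.453  # $B per year of new rural 4G support
caf_ii = 1.544            # $B per year paid to the big telcos; ends after 2020
new_program = 2.04        # $B per year if $20.4B is spread evenly over ten years
cap = 11.4                # $B, the FCC's new cap on the fund

usf_2020 = usf_2018 + mobility_fund_ii + new_program
print(f"2020 fund: ${usf_2020:.2f}B vs. cap of ${cap}B")  # ~$11.19B, just under the cap

usf_2021 = usf_2020 - caf_ii                              # CAF II payments end
print(f"2021 fund: ${usf_2021:.2f}B")                     # ~$9.65B, close to the $9.6B above
```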

There are some obviously big things we don’t know. The biggest is the timing of the awards. Will this be a one-time auction for the whole $20.4 billion or a new $2 billion auction for each of the next ten years? This is a vital question. If there is an auction every year then every rural county will have a decent shot at the funding. That will give counties time to develop business plans and create any needed public-private partnerships to pursue the funding.

However, if the funding is awarded later this year in one big auction and then disbursed over ten years, then I predict that most of the money will go again to the big telcos – this would be a repeat of the original CAF II. That is my big fear. There was great excitement in rural America for the original CAF II program, but in the end that money was all given to the big telcos. The big telcos could easily promise to improve rural DSL to 25/3 Mbps given this kind of funding. They’d then have ten years to fulfill that promise. I find it worrisome that the announcement said that the funding could benefit around 4 million households – that’s exactly the number of households covered by the big telcos in CAF II.

What will be the study areas? The CAF II program awarded funding by county. Big study areas benefit the big telcos since anybody else chasing the money would have to agree to serve the same large areas. Big study areas mean big projects, which will make it hard for many ISPs to raise any needed matching funds for the grants – large study areas would make it impossible for many ISPs to bid.

My last concern is how the funds will be administered. For example, the current ReConnect funding is being administered by the RUS which is part of the Department of Agriculture. That funding is being awarded as part grants and part loans. As I’ve written many times, there are numerous entities that are unable to accept RUS loans. There are features of those loans that are difficult for government entities to accept. It’s also hard for a commercial ISP to accept RUS funding if they already carry debt from some other source. The $20.4 billion is going to be a lot less impressive if a big chunk of it is loans. It’s going to be disastrous if loans follow the RUS lending rules.

We obviously need to hear a lot more. This could be a huge shot in the arm to rural broadband if done properly – exactly the kind of boost that we need. It could instead be another huge giveaway to the big telcos – or it could be something in between. I know I tend to be cynical, but I can’t ignore that some of the largest federal broadband funding programs have been a bust. Let’s all hope my worries are unfounded.

The National Broadband Penetration Rate

My firm CCG Consulting recently completed residential surveys in three cities where I found broadband penetration rates of between 92% and 93%. Those are the highest broadband take rates I’ve ever seen. If I had encountered only one city with a penetration rate that high, I would assume there was some reason why more people in that city have broadband. But having now seen three cities with the same high penetration rate, I started to ask myself different questions. How unusual is it for cities to have penetration rates at that level? What penetration rates should I expect to see today in cities?

I first thought through the survey process. I’ve always found a well-designed survey to produce reliable results for questions like quantifying the market share of the major ISPs. I’ve worked with a few cities that had detailed customer penetration data from franchise fee reporting and in those cities our surveys closely matched that data. I’ve also worked in a few cities where we’ve done several surveys in a relatively short period of time and got nearly the same results from multiple surveys. I’ve come to trust survey results – as long as you follow good practices to make sure the survey is conducted randomly the results seem to be reliable.

I then turned to published industry statistics on the number of broadband customers to see what those told me. The two most cited statistics come from USTelecom and Leichtman Research Group (LRG). As of the end of 2017 USTelecom claimed that 79% of homes had a wired broadband connection, defined as any connection that is faster than 200 kbps, which eliminates dial-up. Leichtman Research Group claimed that 84% of homes had a wired broadband connection at the end of 2017 based upon a nationwide survey. Those numbers are significantly different. Luckily both groups also publish counts of national broadband subscribers, providing a second way to compare the two.

In the USTelecom Industry Metrics and Trends report from March 2018, USTelecom said there were 100 million residential broadband ‘connections’ at the end of 2017. They claim total broadband connections of 109 million when adding businesses.

Leichtman Research Group counts broadband ‘subscribers’ every quarter by gathering the statistics from the financial reports of the largest ISPs. LRG includes all of the big ISPs from Comcast down to Cincinnati Bell with its roughly 300,000 broadband customers. LRG claims these large companies represent about 95% of the whole broadband market. LRG counted 95.8 million total broadband customers at the end of 2017 – a count that includes businesses. Adjusting to add the remaining 5% of the market, LRG shows 100.8 million total broadband subscribers, including businesses – over 8 million fewer than what USTelecom counts.
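One easy place to stumble in that adjustment is the gross-up: if the big ISPs represent about 95% of the market, you divide their count by 0.95 rather than adding 5%. A quick check of the arithmetic using the figures above:

```python
# Grossing up the LRG count: the large ISPs represent ~95% of the market, so the
# market total is their count divided by 0.95 (not their count plus 5%).
large_isp_subscribers = 95.8e6
total = large_isp_subscribers / 0.95
print(f"{total / 1e6:.1f} million")   # ~100.8 million, vs. USTelecom's 109 million
```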

That’s an astounding difference, and it’s obvious the two groups aren’t counting broadband customers the same way. There must be a difference between ‘subscribers’ and ‘connections’.

I’ve only come up with one reason why the counts would be that different. A lot of apartment complexes and business high rises today are served by a big data pipe provided to the landlord, who then provides broadband to tenants. I’m guessing that the LRG numbers consider the big data pipe to be one broadband customer. In most cases the LRG numbers come from quarterly financial reports to shareholders, and my guess is that ISPs consider a subscriber to be an entity that receives a bill for broadband service.

I further postulate that USTelecom counts the number of tenants in those same buildings as ‘connections’. We know that big ISPs often do that. For example, AT&T agreed with regulators to pass 12.5 million new residences and businesses with fiber as part of their merger with DirecTV. It’s been clear that one of the big components of those new passings comes from units in apartment complexes. If AT&T were to build fiber past an apartment complex, they could count every unit as a passing to satisfy the FCC without having to win any of them as customers.

The other component of the penetration rate equation is the number of US households. That number is just as confusing. I found a lot of different estimates of the number of US households. For example, the US Census says there were 137.4 million total living units at the end of 2017, with 118.8 million occupied living units. Statista estimates 127.6 million households at the end of 2018. YCharts shows 122.6 million households at the end of 2018. That’s a wide range of ways to count potential residential customers in the country.

Finally, when trying to estimate the broadband penetration rates to be expected in cities, you have to back the rural homes that can’t get broadband out of the equation. That’s also a difficult number to pin down, and I can find estimates that range from 6 million to 12 million homes with no broadband alternative.

The bottom line is that I don’t really know what I should expect as an urban broadband penetration rate. I can do math that supports a typical urban penetration rate of 92%. Mostly what I learned from this exercise is how careful I need to be when citing national broadband statistics – if you play it loose you can get almost any answer you want.
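Here is the kind of back-of-the-envelope math I mean, using the figures cited above. Treating the roughly 100 million residential connections as the numerator and backing the unserved rural homes out of the various household counts is a simplifying assumption, but it shows how easily the answer moves around:

```python
# Back-of-the-envelope penetration rates using the ranges cited in this post.
# Simplifying assumption: all ~100M residential connections are counted against
# households that actually have a broadband option.

residential_subscribers = 100e6          # USTelecom residential 'connections'
household_estimates = {
    "Census occupied units": 118.8e6,
    "YCharts": 122.6e6,
    "Statista": 127.6e6,
}
rural_unserved_range = (6e6, 12e6)       # homes with no broadband alternative

for source, households in household_estimates.items():
    for unserved in rural_unserved_range:
        servable = households - unserved
        rate = residential_subscribers / servable
        print(f"{source}, {unserved / 1e6:.0f}M unserved: {rate:.0%}")
```

Depending on which numbers you pick, the ‘expected’ penetration rate lands anywhere from the low 80s to the mid 90s, which is exactly the problem.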

Setting Broadband Rates

One of the more interesting things about being a consultant is that I often get to work with new ISPs. One of the topics that invariably arises is how to set rates. There is no right or wrong answer and I’ve seen different pricing structures work in the marketplace. Most rate structures fit into one of these categories:

  • Simple rates with no discounts or bundling;
  • Rates that mimic the incumbent providers;
  • High rates, but with the expectation of having discounts and promotions;
  • Complex rates that cover every imaginable option.

Over the years I’ve become a fan of simple rate structures for a number of reasons:

  • Simple rates make it easy for customer service reps and other employees.
  • It’s easy to advertise simple rates: “Our rates are the same for everybody – no gimmicks, no tricks, no hidden fees”.
  • It’s easy to bill simple rates. Nobody has to keep track of when special promotions are ending. Simple rates largely eliminate billing errors.
  • It eliminates the process of having to negotiate prices annually with customers. That’s an uncomfortable task for customer service reps. There are customers in every market who chase the cheapest rates and the latest special. Many facility-based ISPs have come to understand that such customers are not profitable if they only stay with the ISP for a year before chasing a cheaper rate elsewhere.
  • It’s easier for customers. Customers appreciate simple, understandable bills. Customers who don’t like to negotiate rates don’t get annoyed when their neighbors pay less than them. Simple rates make it easy to place online orders.

As a consumer I like simple rates. When Sling TV first entered the market they had two similar channel line-ups to choose from, with several additional options on top of each basic package. Since they were the only online provider at the time, I waded through the process of comparing the packages. But I was really annoyed that they made me do so much work to buy their product, and when a simpler provider came along I jumped ship. To this day I can’t figure out what Sling TV gained from making it so hard to compare their options.

ISPs can be just as confusing. I was looking online the other day at the packages offered by Cox. They must have fifty or sixty different triple and double play packages online and it’s virtually impossible for a customer to wade through the choices unless they know exactly what they want.

There are fiber overbuilders who are just as confusing. I remember looking at the pricing list of one of the earliest municipal providers. They had at least a hundred different speed combinations of upload and download speeds. I understand the concept of giving customers what they want, but are there really customers in the world who care about the difference between speed combinations like 35/5 Mbps, 38/5 Mbps, or 35/10 Mbps? I know several smaller ISPs who have as many options as Cox and have a different product name for each unique combination of broadband, video, and voice.

There is such a thing as being too simple. Google Fiber launched in Kansas City with a single product, $70 gigabit broadband. They were surprised to find that a lot of customers wouldn’t consider them since they didn’t offer video or telephone service. Over a few years Google Fiber introduced simple versions of those products and now also offer a 100 Mbps broadband product for $50. Even with these product additions they still have one of the simplest product lineups in the industry – and they are now attractive to a lot more homes.

I know ISPs with complicated rates that have achieved good market penetration. But I have to wonder if they would have done even better had they used simpler rates and made it easier on their staffs and the public.

How We Use More Bandwidth

We’ve known for decades that the demand for broadband has been doubling roughly every three years since 1980. Like at any point along that growth curve, there are those who look at the statistics and think that we are nearing the end of the growth curve. It’s hard for a lot of people to accept that bandwidth demand keeps growing on that steep curve.

But the growth is continuing. The company OpenVault measures broadband usage for big ISPs and they recently reported that the average monthly data use for households grew from 201.6 gigabytes in 2017 to 268.7 gigabytes in 2018 – a growth rate of 33%. What is astounding is the magnitude of growth, with an increase of 67.1 gigabytes in just a year. You don’t have to go back very many years to find a time when that number couldn’t have been imagined.
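Those two data points are easy to sanity-check, and it’s worth comparing the one-year growth rate to the long-run doubling-every-three-years curve mentioned above:

```python
# Sanity check of the OpenVault figures and the long-run growth curve.
usage_2017 = 201.6   # average GB per household per month, 2017
usage_2018 = 268.7   # average GB per household per month, 2018

growth = usage_2018 / usage_2017 - 1
print(f"Year-over-year growth: {growth:.1%}")          # ~33%

# Doubling every three years implies annual growth of 2^(1/3) - 1, about 26%,
# so 2018 actually ran a bit ahead of the historical curve.
print(f"Doubling-every-3-years rate: {2 ** (1 / 3) - 1:.1%}")
```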

That kind of growth means that households are finding applications that use more bandwidth. Just in the last few weeks I saw several announcements that highlight how bandwidth consumption keeps growing. I wrote a blog last week describing how Google and Microsoft are migrating gaming to the cloud. Interactive gaming already uses a significant amount of bandwidth, but that usage is going to explode upwards when the machine operating the game is in a data center rather than on a local computer or game console. Google says most of its games will operate using 4K video, meaning a download speed of at least 25 Mbps for one stream plus an hourly download usage of 7.2 GB.

I also saw an announcement from Apple that users of the Apple TV stick or box can now use it with PlayStation Vue to watch up to four separate video streams simultaneously. That’s intended for the serious sports fan, and there are plenty of households that would love to keep track of four sporting events at the same time. If the four separate video streams are broadcast in HD that would mean downloading 12 GB per hour. If the broadcasts are in 4K that would be an astounding 29 GB per hour.

The announcement that really caught my eye is that Samsung is now selling an 8K video-capable TV. It takes a screen of over 80 inches for the human eye to perceive any benefit from 8K video. There are no immediate plans for anybody to broadcast in 8K, but the same was true when the first 4K TVs were sold. When people buy these TVs, somebody is going to film and stream content in the format. I’m sure that 8K video will have some improved compression techniques, but without a new compression scheme, an 8K video stream is 16 times larger than an HD stream – meaning a theoretical download of 48 GB per hour.
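All of the per-hour figures in the last few paragraphs follow from the same simple conversion between a sustained stream rate and gigabytes per hour. Here is a small sketch; note that working backwards from 7.2 GB per hour implies an actual 4K stream rate of about 16 Mbps, with the 25 Mbps figure presumably being the recommended connection speed with some headroom:

```python
# Converting between a sustained stream rate (Mbps) and data volume (GB/hour):
#   GB/hour = Mbps * 3600 seconds / 8 bits-per-byte / 1000 MB-per-GB

def gb_per_hour(mbps: float) -> float:
    return mbps * 3600 / 8 / 1000

def stream_mbps(gb_per_hr: float) -> float:
    return gb_per_hr * 1000 * 8 / 3600

print(stream_mbps(3.0))      # ~6.7 Mbps  - the ~3 GB/hour HD figure
print(stream_mbps(7.2))      # 16 Mbps    - the 7.2 GB/hour 4K figure
print(4 * gb_per_hour(16))   # ~28.8 GB/hour for four simultaneous 4K streams
print(stream_mbps(48))       # ~107 Mbps  - 48 GB/hour for 8K at 16x the HD rate
```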

Even without these new gadgets and toys, video usage is certainly the primary driver of the growth of household broadband. In 2014 only 1% of homes had a 4K-capable TV – the industry projects that to go over 50% by the end of this year. As recently as two years ago you had to search to find 4K programming. Today almost all original programming from Netflix, Amazon, and others is shot in 4K, and those services automatically feed 4K streams to any customer connection able to accept them. User-generated 4K video, often non-compressed, is all over YouTube. There are now 4K security cameras on the market, just when HD cameras have completely replaced older analog cameras.

Broadband usage is growing in other ways. Cisco projects machine-to-machine connections will represent 51% of all online connections by 2022, up from 40% today. Parks Associates just reported that the average broadband home now has ten connected devices, and those devices all make internet connections on their own. Our computers and cellphones automatically update software over our broadband connections. Many of us set our devices to automatically back up our hard drives, pictures, and videos in the cloud. Smart home devices constantly report back to the alarm monitoring service. None of these connections sound large, but in aggregate they really add up.

And sadly, we’re also growing more inefficient. As households run multiple simultaneous streams of music, video, and file downloads, we overload our WiFi connections and/or our broadband connections and trigger significant retransmission of missing or incomplete packets. I’ve seen estimates that this overhead can easily average 20% of the bandwidth used when households try to do multiple things at the same time.

I also know that when we look up a few years from now and see that broadband usage is still growing, there will be a new list of reasons for the growth. It may seem obvious, but when handed enough bandwidth, households are finding a way to use it.

Please, Not Another Mapping Debacle

There are numerous parties making proposals to the FCC on how to fix the broken broadband mapping program. Today I want to look at the proposal made by USTelecom. On the surface the USTelecom proposal sounds reasonable. They want to geocode every home and business in the US to create a giant database and map of potential broadband customers. ISPs will then overlay speeds on the detailed maps, by address. USTelecom suggests that defining broadband by address will eliminate the problems of reporting broadband by Census block.

Their idea should work well for customers of fiber ISPs and cable companies. Customer addresses are either covered by those technologies or they’re not. But the proposed new maps won’t do much better than current maps for the other technologies used in rural America for a number of reasons:

  • Telcos that provide rural DSL aren’t going to tell the truth about the speeds being delivered. Does anybody honestly believe that, after taking billions of dollars to improve rural DSL, Frontier and CenturyLink are going to admit on these maps that customers in areas covered by CAF II are getting less than 10 Mbps?
  • In the telcos’ favor, it’s not easy for them to define DSL speeds. We know that DSL speeds drop with distance from a DSLAM transmitting point, so the speed is different for each customer, even with ideal copper.
  • Rural copper is far from ideal, and DSL speeds vary widely by customer due to local conditions. The quality can vary between wires in the same sheath due to damage or corrosion over time. The quality of the drop wires from the street to the house can drastically impact DSL speeds. Even the inside copper wiring at a home can have a big influence. We also know that in many networks DSL bogs down in the evenings due to inadequate backhaul, so time of day impacts the speed.
  • What is never mentioned when talking about rural DSL is how many customers are simply told by a telco that DSL won’t work at their home for one of these reasons. Telcos aren’t reporting these customers as unservable today and it’s unlikely that they’ll be properly reported in the future.
  • Rural fixed wireless has similar issues. The ideal wireless connection has an unimpeded line-of-sight, but many customers have less than an ideal situation. Even a little foliage can slow a connection. Further, every wireless coverage area has dead spots where customers are blocked from receiving service. Like DSL, wireless speeds also weaken with distance – something a WISP is unlikely or unwilling to disclose by customer. Further, while WISPs can report on what they are delivering to current customers, they have no way of knowing about other homes until they climb on the roof and test the line-of-sight.
  • It’s also going to be interesting to see if urban ISPs admit on maps to the redlining and other practices that have supposedly left millions of urban homes without broadband. Current maps ignore this issue.

USTelecom also wants to test-drive the idea of allowing individuals to provide feedback to the maps. Again, this sounds like a good idea. But in real life this is full of problems:

  • Homeowners often don’t know what speeds they are supposed to get, and ISPs often don’t list the speed on bills. The broadband map is supposed to measure the fastest speed available, and the feedback process will be a mess if customers purchasing slower products inject themselves into the process.
  • There are also a lot of problems with home broadband caused by the customer. ISPs operating fiber networks say that customers claiming low speeds usually have a WiFi problem. Customers might be operating ancient WiFi routers or measuring speed after the signal has passed through multiple interior walls.

I still like the idea of feedback. My preference would be to allow local governments to be the conduit for feedback to the maps. We saw that work well recently when communities intervened to fix the maps as part of the Mobility Fund Phase II grants that were intended to expand rural 4G coverage.

My real fear is that the effort to rework the maps is nothing more than a delaying tactic. If we start on a new mapping effort now the FCC can throw their hands up for the next three years and take no action on rural broadband. They’ll have the excuse that they shouldn’t make decisions based on faulty maps. Sadly, after the three years my bet is that new maps will be just as bad as the current ones – at least in rural America.

I’m not busting on USTelecom’s proposal as much as I’m busting on all proposals. We should not be using maps to decide the allocation of subsidies and grants. It would be so much easier to apply a technology test – we don’t need maps to know that fiber is always better than DSL. The FCC can’t go wrong with a goal of supplanting big telco copper.

Capping the Universal Service Fund

FCC Chairman Ajit Pai recently suggested capping the size of the total Universal Service Fund at $11.4 billion annually, adjusted going forward for inflation. The chairman has taken a lot of flak on this proposal from advocates of rural broadband. Readers of this blog know that I have been a big critic of this FCC on a whole host of issues. However, this idea doesn’t give me much heartburn.

Critics of the idea are claiming that this proves that the FCC isn’t serious about fixing the rural broadband problem. I totally agree with that sentiment – this FCC has done very little to fix rural broadband. In fact, they’ve gone out of their way to try to hide the magnitude of the rural problem by fiddling with broadband statistics and by hiding behind the faulty carrier data that comes out of the FCC’s broadband mapping effort. My personal guess is that there are millions more homes without broadband than are being counted by the FCC.

With that said, the Universal Service Fund shouldn’t be the sole funding source for fixing rural broadband. The fund was never intended for that. The fund was created originally to promote the expansion of rural telephone service. Over time it became the mechanism to help rural telcos survive as other sources of subsidies, like access charges, were reduced. Only in recent years was it repositioned to fund rural broadband.

Although I’m a big proponent of better rural broadband, I am not bothered by capping the Universal Service Fund. First, the biggest components of that fund have been capped for years. The monies available for the rural high-cost program, the schools and libraries fund, and rural healthcare have already been capped. Second, the proposed cap is a little larger than what’s being spent today, and what has been spent historically. This doesn’t look to be a move by the FCC to take away funding from any existing program.

Consumers today fund the Universal Service Fund through fees levied against landline telephone and cellphones. Opponents of capping the fund apparently would like to see the FCC hike those fees to help close the rural broadband gap. As a taxpayer I’m personally not nuts about the idea of letting federal agencies like the FCC print money by raising taxes that we all pay. For the FCC to make any meaningful dent in the rural broadband issue they’d probably have to triple or quadruple the USF fees.

I don’t think there is a chance in hell that Congress would ever let the FCC do that – and not just this Congress, but any Congress. Opponents of Pai’s plan might not recall that past FCCs have had this same deliberation and decided that they didn’t have the authority to unilaterally increase the size of the USF.

If we want the federal government to help fix the rural broadband problem, unfortunately the only realistic solution is for Congress to appropriate real money to the effort. This particular Congress is clearly in the pocket of the big telcos, evidenced by the $600 million awarded for rural broadband in last year’s budget reconciliation process. The use of those funds was crippled by language inserted by the big telcos to make it hard to use the money to compete against the telcos.

And that’s the real issue with federal funding. We all decry that we have a huge rural broadband crisis, but what we really have is a big telco crisis. Every rural area that has crappy broadband is served by one of the big telcos. The big telcos stopped making investments to modernize rural networks decades ago. And yet they still have the political clout to block federal money from being used to compete against their outdated and dying networks.

The FCC does have an upcoming opportunity for funding a new broadband program from the Universal Service Fund. After 2020 nearly $2 billion annually will be freed up in the fund at the end of the original CAF II program. If this FCC is at all serious about rural broadband it should start talking this year about what to do with those funds. This is a chance for Chairman Pai to put his (USF) money where his mouth is.

Verizon to Retire Copper

Verizon is asking the FCC for permission to retire copper networks throughout its service territory in New York, Massachusetts, Maryland, Virginia, Rhode Island and Pennsylvania. In recent months the company has asked to kill copper in hundreds of exchanges in those states. These range from urban exchanges in New York City to exchanges scattered all over the outer suburbs of Washington DC and Baltimore. Some of these filings can be found at this site.

The filings ask to retire the copper wires. Verizon will no longer support copper in these exchanges and will stop doing any maintenance on copper. The company intends to move people who are still served by copper over to fiber and is not waiting for the FCC notice period to make such conversions. Verizon is also retiring the older DMS telephone switches, purchased years ago from the long-defunct Northern Telecom. Telephone service will be moved to the more modern softswitches that Verizon uses for fiber customers.

The FCC process requires Verizon to notify the public about plans to retire copper, and if no objections are filed in a given exchange the retirement takes place 90 days after the FCC’s release of the public notice to retire. Verizon has been announcing copper retirements since February 2017 and was forced to respond to interventions in some locations, but eventually refiled most retirement notices a second time.

Interestingly, much of the FiOS fiber network was built by overlashing fiber onto the copper wires, so the copper wires on poles are likely to remain in place for a long time to come.

From a technical perspective, these changes were inevitable. Verizon is the only big telco to have widely built fiber plant in residential neighborhoods, and it makes no sense to ask them to maintain two technologies in neighborhoods with fiber.

I have to wonder what took them so long to get around to retiring the copper. Perhaps we have that answer in language that is in each FCC request where Verizon says it “has deployed or plans to deploy fiber-to-the-premises in these areas”. When Verizon first deployed FiOS they deployed it in a helter-skelter manner, mostly sticking to neighborhoods which had the lowest deployment cost, usually where they could overlash on aerial copper. At the time they bypassed places where other utilities were buried unless the neighborhood already had empty conduit in place. Perhaps Verizon has quietly added fiber to fill in these gaps or is now prepared to finally do so.

That is the one area of concern raised by these notices. What happens to customers who still only have a copper alternative? If they have a maintenance issue will Verizon refuse to fix it? While Verizon says they are prepared to deploy fiber everywhere, what happens to customers until the fiber is in front of their home or business? What happens to their telephone service if their voice switch is suddenly turned off?

I have to hope that Verizon has considered these situations and that they won’t let customers go dead. While many of the affected exchanges are mostly urban, many of them include rural areas that are not covered by a cable company competitor, so if customers lose Verizon service, they could find themselves with no communications alternative. Is Verizon really going to build FiOS fiber in all of the rural areas around the cities they serve?

AT&T is also working towards eliminating copper and offers fixed cellular as the alternative to copper in rural places. Is that being considered by Verizon but not mentioned in these filings?

I also wonder what happens to new customers. Will Verizon build a fiber drop to a customer who only wants to buy a single telephone line? Will Verizon build fiber to new houses, particularly those in rural areas? In many states the level of telephone regulation has been reduced or eliminated and I have to wonder if Verizon still sees themselves as the carrier of last resort that is required to provide telephone service upon request.

Verizon probably has an answer to all of these questions, but the FCC request to retire copper doesn’t force the company to get specific. All of the questions I’ve asked wouldn’t exist if Verizon built fiber everywhere in an exchange before exiting the copper business. As somebody who has seen the big telcos fail to meet promises many times, I’d be nervous if I was a Verizon customer still served by copper and had to rely on Verizon’s assurance that they have ‘plans’ to bring fiber.

Broadband Statistics 4Q 2018

The Leichtman Research Group has published the statistics of broadband subscribers for the largest ISPs for the year ending December 31, 2018. The following compares the end of 2018 to the end of 2017.

                    4Q 2018       4Q 2017        Change    % Change
Comcast          27,222,000    25,869,000     1,353,000        5.2%
Charter          25,259,000    23,988,000     1,271,000        5.3%
AT&T             15,701,000    15,719,000      (18,000)       -0.1%
Verizon           6,961,000     6,959,000         2,000        0.0%
CenturyLink       5,400,000     5,662,000     (262,000)       -4.6%
Cox               5,060,000     4,960,000       100,000        2.0%
Altice            4,118,100     4,046,000        71,900        1.8%
Frontier          3,735,000     3,938,000     (203,000)       -5.2%
Mediacom          1,264,000     1,209,000        55,000        4.5%
Windstream        1,015,000     1,006,600         8,400        0.8%
Consolidated        778,970       780,794       (1,824)       -0.2%
WOW!                759,600       732,700        26,900        3.7%
Cable ONE           663,074       643,153        19,921        3.1%
Cincinnati Bell     311,000       308,700         2,300        0.7%
Total            98,247,744    95,822,147     2,425,597        2.5%

The large ISPs in the table control over 95% of the broadband market in the country. Not included in these numbers are the broadband customers served by smaller ISPs – the small telcos, WISPs, fiber overbuilders and municipalities.

The biggest cable companies continue to dominate the broadband market and now have 64.3 million customers compared to 33.9 million customers for the big telcos. During 2018 the big cable companies collectively added 2.9 million customers while the big telcos collectively lost 472,000 customers.

What is perhaps most astounding is that Comcast and Charter added 2.6 million customers for the year while the total broadband market for the biggest ISPs grew by only 2.4 million. For years it’s been obvious that the big cable companies are approaching monopoly status in metropolitan areas, and these statistics demonstrate how Comcast and Charter, in particular, have a stranglehold on competition in their markets.
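Those comparisons fall straight out of the table above. Here is a quick sketch of the sums, with the net adds expressed in thousands:

```python
# Net broadband adds for 2018, summed from the table above (in thousands).
cable = {"Comcast": 1353, "Charter": 1271, "Cox": 100, "Altice": 71.9,
         "Mediacom": 55, "WOW!": 26.9, "Cable ONE": 19.921}
telco = {"AT&T": -18, "Verizon": 2, "CenturyLink": -262, "Frontier": -203,
         "Windstream": 8.4, "Consolidated": -1.824, "Cincinnati Bell": 2.3}

print(f"Cable net adds: {sum(cable.values()):,.0f}K")    # ~2,898K - the 2.9 million above
print(f"Telco net adds: {sum(telco.values()):,.0f}K")    # ~-472K
print(f"Comcast + Charter: {cable['Comcast'] + cable['Charter']:,}K of "
      f"{sum(cable.values()) + sum(telco.values()):,.0f}K total market growth")
```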

CenturyLink and Frontier are continuing to bleed DSL customers. Together the two companies lost 465,000 broadband customers in 2018, up from a loss for the two of 343,000 in 2017.

It’s always hard to understand all of the market forces behind these changes. For example, all of the big cable companies are seeing at least some competition from fiber overbuilders in some of their markets. It would be interesting to know how many customers each is losing to fiber competition.

I’d also love to know more about how the big companies are faring in different markets. I suspect that the trends for urban areas are significantly different than in smaller markets. Deep analysis of the FCC’s Form 477 data might tell that story. (hint, hint in case anybody out there wants to do that analysis!)

I’m also curious if the cable companies are seeing enough bottom-line improvement to justify the expensive upgrades to DOCSIS 3.1. Aside from Comcast and Charter I wonder how companies like Cox, Mediacom and Cable ONE justify the upgrade costs. While those companies are seeing modest growth in broadband customers, each is also losing cable customers, and I’d love to understand if the upgrades are cost-justified.

If there is any one takeaway from these statistics it’s that we still haven’t reached the top of the broadband market. I see articles from time to time that predict that younger households are going to bail on landline broadband in favor of cellular broadband. But seeing that over 2.4 million households added broadband in the last year seems to be telling a different story.

The Four Internets

Kieron O’Hara and Wendy Hall authored a paper for the Centre for International Governance Innovation (CIGI) looking at Internet data flow from country to country. CIGI is a non-partisan think tank that looks at issues related to multilateral international governance. They explore policies in many areas that are aimed at finding ways to improve international trade and the exchange of ideas. The group was founded with donations from the founders of Research In Motion (BlackBerry), matched by the Canadian government.

O’Hara and Hall argue that there are four separate models of the Internet operating in the world today. These models differ in the way they view the use of data:

Silicon Valley Open Internet. This is the original vision for the Internet where there should be a free flow of data with little or no restrictions on how data can be used. This model favors Tor and platforms that allow people to privately exchange data without being monitored.

Beijing Paternal Internet. Often also referred to as the Great Firewall of China, the Chinese Internet closely monitors internet usage. Huge armies of censors monitor emails, social media and websites to search for any behavior not sanctioned by the state. The Chinese government blocks foreign apps and web platforms that won’t adhere to its standards. Other authoritarian countries have their own walled-off version of the Beijing model.

Brussels Bourgeois Internet. The European Internet favors the open nature of the Silicon Valley Internet, but then heavily regulates Internet behavior to protect privacy and to try to restrict what it considers to be bad behavior on the Internet. This is the Internet that values people over web companies.

Washington DC Commercial Internet. Washington DC views the Internet as just another market and fosters hands-off policies that essentially equate to zero regulation of the big web players, favoring profitability over privacy and people.

The authors say there may be a fifth internet model emerging, which is the Russian model where the government actively uses the Internet for propaganda purposes.

These various models of the Internet matter as world commerce continues to move online and growing volumes of data are exchanged between countries. We are now reaching a point where the different models conflict. Simple things, like the nature of the personal data that can be recorded and exchanged with an e-commerce transaction, are now different around the world.

We are already seeing big differences arise for how countries treat their own data. For example, the Chinese, Russians, and Indians are insisting that data of all kinds created within the country should be stored in servers within the country and not easily be shared outside. That kind of restriction equates to the creation of international boundaries for the exchange of data. This is likely to grow over time and result in international commerce flowing through some version of data customs rather than flowing freely.

The paper asks some interesting questions on how we resolve these sorts of issues. For example, could there be some sort of international global data space for e-commerce, which would be treated differently than the exchange of other kinds of data?

The issues highlighted in the paper are real ones that are likely to start making news over the next few years. For example, the Chinese shopping site Alibaba is poised to offer a serious challenge to Amazon in the US. Considering the concern in the US of espionage by Chinese firms like Huawei, will the US somehow restrict a Chinese firm from conducting e-commerce within the US (and gathering data on US citizens)? Multiply that one example by hundreds of similar concerns that exist between countries and it’s not hard to picture a major splintering of the international Internet.

Gaming Migrates to the Cloud

We are about to see a new surge in demand for broadband as major players in the game industry have decided to move gaming to the cloud. At the recent Game Developers Conference in San Francisco both Google and Microsoft announced major new cloud-based gaming initiatives.

Google announced Stadia, a platform that they tout as being able to play games from anywhere with a broadband connection on any device. During the announcement they showed transferring a live streaming game from desktop to laptop to cellphone. Microsoft announced the new xCloud platform that lets Xbox gamers play a game from any connected device. Sony has been promoting online play between gamers for many years and now also offers some cloud gaming on the PlayStation Now platform.

OnLive tried this in 2011, offering a platform that was played in the cloud using OnLive controllers, but without needing a computer. The company failed due to the quality of broadband connections in 2011, but also due to limitations at the gaming data centers. Both Google and Microsoft now operate regional data centers around the country that house state-of-the-art whitebox routers and switches that are capable of handling large volumes of simultaneous gaming sessions. As those companies have moved large commercial users to the cloud they created the capability to also handle gaming.

The gaming world was ripe for this innovation. Current gaming ties gamers to gaming consoles or expensive gaming computers. Cloud gaming brings mobility to gamers, and it also eliminates the need to buy expensive gaming consoles. This move to the cloud probably signals the beginning of the end for the Xbox, PlayStation, and Nintendo consoles.

Google says it will support some games at the equivalent of an HD video stream, at 1080p and 60 frames per second. That equates to about 3 GB of download per hour. But most of the Google platform is going to operate at 4K video speeds, requiring download speeds of at least 25 Mbps per gaming stream and using 7.2 GB of data per hour. Nvidia has been telling gamers that they need 50 Mbps per 4K gaming connection.

This shift has huge implications for broadband networks. First, streaming causes the most stress on local broadband networks since the usage is continuous over long periods of time. A lot of ISP networks are going to start showing data bottlenecks when significant numbers of additional users stream 4K connections for hours on end. Until ISPs react to this shift, we might return to those times when broadband networks bogged down in prime time.

This is also going to increase the need for download and upload speeds. Households won’t be happy with a connection that can’t stream 4K, so they aren’t going to be satisfied with the 25 Mbps connection that the FCC says is broadband. I have a friend with two teenage sons who both run two simultaneous game streams while watching a streaming gaming TV site. It’s good that he is able to buy a gigabit connection on Verizon FiOS, because his sons alone are using a continuous broadband connection of at least 110 Mbps, and probably more.

We are also going to see more people looking at the latency on networks. The conventional wisdom is that a gamer with the fastest connection has an edge. Gamers value fiber over cable modems and value cable modems over DSL.

This also is going to bring new discussion to the topic of data caps. Gaming industry statistics say that the serious gamer averages 16 hours per week of play. Obviously, many play longer than the average. My friend with the two teenagers is probably looking at at least 30 GB per hour of broadband download usage, plus a decent chunk of upload usage. Luckily for my friend, Verizon FiOS has no data cap. Many other big ISPs like Comcast start charging for data usage over one terabyte per month – a number that won’t be hard to reach for a household with gamers.
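As a rough illustration of how quickly that cap disappears for a household like my friend’s: the 30 GB per hour and 16 hours per week are the figures above, and the two-gamer household is obviously an assumption rather than a measurement.

```python
# How fast heavy cloud gaming eats a 1 TB monthly data cap, using the rough
# figures above (household rate and hours per week are estimates, not measurements).
cap_gb = 1000                 # a typical 1 TB monthly cap
household_gb_per_hour = 30    # two simultaneous gamers plus a gaming TV stream

hours_to_cap = cap_gb / household_gb_per_hour
print(f"Hours of play to hit the cap: {hours_to_cap:.0f}")           # ~33 hours

hours_per_week = 2 * 16       # two 'average' serious gamers at 16 hours/week each
print(f"Weeks to hit the cap: {hours_to_cap / hours_per_week:.1f}")  # ~1 week
```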

I think this also opens up the possibility for ISPs to sell gamer-only connections. These connections could be routed straight to peering arrangements with Google or Microsoft to guarantee the fastest connection through the network, and they wouldn’t mix gaming streams with other household broadband streams. Many gamers will pay extra to have a speed edge.

This is just another example of how the world finds ways to use broadband when it’s available. We’ve obviously reached a time when online gaming can be supported. When OnLive tried this, there were not enough households with fast enough connections, there weren’t fast enough regional data centers, and there wasn’t a peering network in place where ISPs connect directly to big data companies like Google and bypass the open Internet.

The gaming industry is going to keep demanding faster broadband and I doubt they’ll be satisfied until we have a holodeck in every gamer’s home. But numerous other industries are finding ways to use our increasing household broadband capacity and the overall demand keeps growing at a torrid pace.