Hidden Fees Adding Up

Consumer Reports recently published a special report titled “What’s the Fee?: How Cable Companies Use Hidden Fees to Raise Prices and Disguise the True Cost of Service”. For many years, cable companies have advertised prices that are significantly lower than the actual bills customers see – but the CR report shows that the size of these fees has grown substantially over the last few years.

The report lists several specific examples. For instance, the combined broadcast fee and regional sports fee at Comcast increased from $2.50 in 2015 to $18.25 currently. The broadcast fee supposedly covers the cost of buying local network channels – ABC, CBS, FOX, and NBC. The regional sports fee can cover the cost of channels carrying regional college and pro sports. In both cases, the cable companies never disclose the actual programming costs that these fees are supposed to cover.

The report shows that Charter increased its broadcast fee three times in the last year, climbing from $8.85 to $13.50 per month by October 2019.

It’s not hard to understand why customers are confused by the many fees. The report points out that some cable bills have more than a dozen line items, which are a mix of rates for products, external taxes and fees, and these various ‘hidden’ fees – meaning they are usually not disclosed when advertising the products.

In addition to the broadcast TV fee and the regional sports fee, the report lists the following other fees:

  • Settop box rental fee. This recovers the cost of the settop box hardware. For many years this fee was around $5 monthly for most cable providers, but this is another area that has seen big price increases in recent years – the highest rate I’ve seen is $12 per month. That fee recovers the cost of a settop box that costs small ISPs a little over $100 and must cost even less for the big cable companies.
  • Cable Modem / WiFi Router. This is the fee with perhaps the biggest range of pricing – some ISPs don’t charge for this while others are charging more than $10 per month.
  • HD Technology Fee. This fee used to be charged by almost every cable company back when they started offering HD channels (a decade ago many channels were offered in both an HD and an analog format). Now that the whole industry has largely gone to digital programming, CR reports the only company still charging this fee is Comcast.
  • Internet Service Fees. This is a relatively new fee that gets billed to anybody buying Internet Access. The report highlights the fees charged by RCN and Frontier.
  • Administrative and Other Fees. These are often fees under various names that don’t cover any specific costs. However, some fees are specific – I just read an article describing a $7 fee to business customers by AT&T in California to recover property taxes.

Consumer Reports collected a number of sample bills from customers and found that the monthly company-imposed fees on the bills it analyzed averaged $22.96 for AT&T U-verse, $31.28 for Charter, $39.59 for Comcast, $40.16 for Cox, and $43.79 for Verizon FiOS. CR estimates that these fees could total at least $28 billion per year nationwide.
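
As a rough illustration – not CR’s actual methodology – the sketch below shows how a nationwide estimate in that ballpark can be derived from the per-company averages. The subscriber count and the simple unweighted blending are my own assumptions:

```python
# Back-of-the-envelope sketch of a nationwide fee estimate.
# The subscriber count and the unweighted blending are illustrative assumptions,
# not figures from the Consumer Reports study.

avg_monthly_fees = {            # CR's reported average monthly company-imposed fees
    "AT&T U-verse": 22.96,
    "Charter": 31.28,
    "Comcast": 39.59,
    "Cox": 40.16,
    "Verizon FiOS": 43.79,
}

blended_fee = sum(avg_monthly_fees.values()) / len(avg_monthly_fees)
assumed_subscribers = 70_000_000      # assumption: US households paying these kinds of fees

annual_total = blended_fee * assumed_subscribers * 12
print(f"Blended monthly fee:  ${blended_fee:.2f}")
print(f"Implied annual total: ${annual_total / 1e9:.0f} billion")
```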

To be fair to the cable providers, these fees are not all profits. The companies pay out substantial retransmission fees for local content and pay a lot for sports programming. However, some of the fees like settop box and modem rentals are highly profitable, generating revenues far above the cost of the hardware. Some of the fees like administrative fees are 100% margin for the companies.

Consumer Reports advocates for legislation that would force cable companies and ISPs to fully disclose everything on bills, similar to what happened with the airline industry in 2011 with the Full Fare Advertising Rule. CR believes that the FCC has the authority to require such transparency without legislation.

Improving Rural Wireless Broadband

Microsoft has been implementing rural wireless broadband using white space spectrum – the slices of spectrum that sit between traditional TV channels. The company announced a partnership with ARK Multicasting to introduce a technology that will boost the efficiency of fixed wireless networks.

ARK Multicasting does just what its name implies. Today about 80% of home broadband usage is for video, and ISPs unicast video, meaning that they send a separate stream of a given video to each customer who wants to watch it. If ten customers in a wireless node are watching the same new Netflix show, the ISP sends out ten copies of the program. Today, even in a small wireless node of a few hundred customers, an ISP might be transmitting dozens of simultaneous copies of the most popular content in an evening. The ARK Multicasting technology will send out just one copy of the most popular content on the various OTT services like Netflix, Amazon Prime, and Apple TV. This one copy will be cached in an end-user storage device, and if a customer elects to watch the new content, they view it from the local cache.
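
As a rough sketch of why that matters, the snippet below compares unicast and cached-multicast backhaul load for a single node at the evening peak. The stream bitrate and viewer counts are illustrative assumptions, not figures from ARK or Microsoft:

```python
# Unicast vs. multicast/cached delivery for one fixed-wireless node at peak.
# The bitrate and viewer counts below are illustrative assumptions.

STREAM_MBPS = 5.0        # assumed bitrate of a single HD stream
peak_viewers = 100       # assumed simultaneous viewers on the node
distinct_titles = 25     # assumed distinct titles those viewers are watching

unicast_mbps = peak_viewers * STREAM_MBPS          # today: one copy per viewer
multicast_mbps = distinct_titles * STREAM_MBPS     # one copy per title, cached locally

print(f"Unicast peak load:   {unicast_mbps:.0f} Mbps")
print(f"Multicast peak load: {multicast_mbps:.0f} Mbps")
print(f"Savings:             {100 * (1 - multicast_mbps / unicast_mbps):.0f}%")
```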

The net impact of multicasting should be a huge decrease in demand for video content during peak network hours. It would be interesting to know what percentage of video viewing in a given week comes from watching newly released content. I’m sure all of the OTT providers know that number, but I’ve never seen anybody talk about it. If anybody knows that statistic, please post it in the reply comments to this blog. Anecdotal evidence suggests the percentage is significant because people widely discuss new content on social media soon after it’s released.

The first trial of the technology is being done in conjunction with a Microsoft partner wireless network in Crockett, Texas. ARK Multicasting says that it is capable of transmitting 7-10 terabytes of content per month, which equates to 2,300 – 3,300 hours of HD video. We’ll have to wait to see the details of the deployment, but I assume that Microsoft will provide the hefty CPE capable of multi-terabyte storage – there are no current consumer settop boxes with that much capacity. I also assume that cellphones and tablets will grab content using WiFi from the in-home storage device since there are no tablets or cellphones with terabyte storage capacity.
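
The hours figure is easy to sanity-check if you assume roughly 3 GB per hour of HD video – my assumption, since ARK hasn’t published a bitrate:

```python
# Sanity check of the terabytes-to-HD-hours conversion.
GB_PER_HD_HOUR = 3.0     # assumption; actual bitrates vary by service and encoding

for terabytes in (7, 10):
    hours = terabytes * 1_000 / GB_PER_HD_HOUR
    print(f"{terabytes} TB is roughly {hours:,.0f} hours of HD video")
```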

To be effective ARK must be deleting older programming to make room for new, meaning that the available local cache will always contain the latest and most popular content on the various OTT platforms.

There is an interesting side benefit of the technology. Viewers should be able to watch cached content even if they lose the connection to the ISP. Even after a big network outage due to a storm, ISP customers should still be able to watch many hours of popular content.

This is a smart idea. The weakest part of the network for many fixed wireless systems is the backhaul connection. When a backhaul connection gets stressed during the busiest hours of network usage all customers on a wireless node suffer from dropped packets, pixelization, and overall degraded service. Smart caching will remove huge amounts of repetitive video signals from the backhaul routes.

Layering this caching system onto any wireless system should free up peak evening network resources for other purposes. Fixed wireless systems are like most other broadband technologies where the bandwidth is shared between users of a given node. Anything that removes a lot of video downloading at peak times will benefit all users of a node.

The big OTT providers already do edge-caching of content. Providers like Netflix, Google, and Amazon park servers at or near ISPs to send local copies of the latest content. That caching saves a lot of bandwidth on the internet transport network. The ARK Multicasting technology will carry caching down to the customer level and bring the benefits of caching to the last-mile network.

A lot of questions come to mind about the nuances of the technology. Hopefully the downloads are done in the slow hours of the network so as not to add to network congestion. Will all popular content be sent to all customers – or just content from the services they subscribe to? The technology isn’t going to work for an ISP with data caps because the caching means customers might be downloading multiple terabytes of data that may never be viewed.

I assume that if this technology works well, ISPs of all kinds will consider it. One interesting aspect of the concept is that this means getting ISPs back into the business of supplying boxes to customers – something that many ISPs avoid as much as possible. However, if it works as described, this caching could create a huge boost to last-mile networks by relieving a lot of repetitive traffic, particularly at peak evening hours. I remember local caching being tried a decade or more ago, but it never worked as promised. It will be interesting to see if Microsoft and ARK can pull this off.

A New Technology for MDU Broadband

A Canadian company recently announced a new device that promises the ability to deliver gigabit speeds inside of MDUs using existing copper or coaxial wiring. The company is Positron Access Solutions, and I talked to their CTO and president, Pierre Trudeau, at the recent Broadband Communities event in Washington DC. Attached are an article and a PowerPoint describing the new technology.

The technology is built upon a framework of the G.hn standards. You might remember this as the standard supporting powerline carrier that was used before WiFi to distribute broadband around the home using the electrical wiring in the home. G.hn over powerline was a sufficient technology when broadband speeds were slow but didn’t scale up to support faster broadband speeds. In thinking back, I recall that the biggest limitation was that there are dozens of different types of electrical wires used in homes over the last century and it was hard to have a technology that worked as promised over the various sizes and types of in-home wiring.

Positron has been around for many years and manufactures IP PBX systems and DSL extenders. They are referring to the new technology as GAM, which I take to mean G.hn Access Network.

The company says that the technology will deliver a gigabit signal about 500 feet over telephone copper wires and over 4,000 feet on coaxial cable. Large MDUs delivering the technology using telephone copper might require spacing a few devices throughout parts of the network.

The technology operates on unused frequency bands on the copper cables. For example, on telephone copper, the technology can coexist on a telephone wire that’s already carrying telephone company voice. On coaxial cable, the Positron device can coexist with satellite TV from DirecTV or Dish Networks but can’t coexist with a signal from a traditional cable company.

Positron says they are a natural successor to G.fast, which has never gotten a lot of traction in the US. Positron says they can deliver more bandwidth with less noise than G.fast. The Positron GAM spits out Ethernet at the customer apartment unit and can be used with any existing CPE like WiFi routers, computers, TVs, etc.

This is a new technology and the company currently has only a few test units at clients in the field. Like all new technology, this should be considered a beta product, with the vendor still working out field issues. But the technology has a lot of promise if perfected. There are a lot of older MDUs where the cost of rewiring is prohibitive or where the building owners don’t want fiber strung through hallways. Getting to apartment units through existing copper wiring should be less disruptive, less expensive, and faster to market.

I always caution all of my clients about using first-generation technology. It’s bound to suffer from issues that aren’t discovered until deployed in real-world situations. First-generation equipment is always a risk since many vendors have abandoned product lines that have too many field problems. The supply chain is often poorly defined, although in the case of Positron the company has been providing technical support for many years. My main concern with beta technology is that it’s never comfortable using end-user customers as guinea pigs.

However, an MDU might be the perfect environment to try new technology. Many MDUs have been unable to attract better broadband due to high rewiring costs and might be willing to work with an ISP to test new technology. If this technology operates as touted it could provide a cost-effective way to get broadband into MDUs, particularly older ones where rewiring is a cost barrier.

Is OTT Service Effective Competition for Cable TV?

The FCC made an interesting ruling recently that signals the end of regulation of basic cable TV. Charter Communications had petitioned the FCC, claiming that its properties in Massachusetts face ‘effective competition’ for cable TV due to competition from OTT providers – in this case, from AT&T DirecTV Now, a service that offers a full range of local and traditional cable channels.

The term effective competition is a very specific regulatory term, and once a market reaches that status a cable company can change rates at will for basic cable – the tiers that include local network stations.

The FCC agreed with Charter and said that the markets are competitive and granted Charter the deregulated status. This designation in the past has been granted in markets that have a high concentration of satellite TV or else that have a lot of alternative TV offered by a fiber or DSL overbuilder that has gained a significant share of the market.

In making this ruling the FCC effectively deregulated cable everywhere since there is no market today that doesn’t have a substantial amount of OTT content competing with cable companies. Cable providers will still have to go through the process of asking to deregulate specific markets, but it’s hard to think that after this ruling that the FCC can say no to any other petition.

From a regulatory perspective, this is probably the right ruling. Traditional cable is getting clobbered, and it looks like the industry as a whole might lose 5-6 full percentage points of market share this year and end up under a 65% national penetration rate. While we are only in the third year in which cord cutting has been a measurable trend, the cable industry’s customer losses are already nearly identical to the losses for landline telephone at the peak of that market’s decline.

There are two consequences for consumers in a market that is declared to be effectively competitive. First, it frees cable companies from the last vestiges of basic cable rate regulation. This is not a huge benefit because cable companies have been free for years to raise rates in higher tiers of service. In a competitive market, a cable provider is also no longer required to carry local network channels in the basic tier – although very few cable systems have elected this option.

I’ve seen several articles discussing this ruling that assume that this will result in an instant rate increase in these markets – and they might be right. It’s a headscratcher watching cable companies raising rates lately when higher rates are driving households to become cord cutters. But cable executives don’t seem to be able to resist the ability to raise rates, and each time they do, the overall revenue of a cable system increases locally, even with customer defections.

It’s possible that this ruling represents nothing more than the current FCC’s desire to deregulate as many things as possible. One interesting aspect of this ruling is that the FCC has never declared OTT services like SlingTV or DirecTV Now to be MVPDs (multichannel video programming distributors) – a ruling that would pull these services into the cable TV regulatory regime. From a purely regulatory viewpoint, it’s hard to see how a non-MVPD service can meet the technical requirements of effective competition. However, from a practical perspective, it’s not hard to perceive the competition.

Interestingly, customers are not leaving traditional cable TV and flocking to the OTT services that emulate regular cable TV service. Those services have recently grown to become expensive and most households seem to be happy cobbling together packages of content from OTT providers like Netflix and Amazon Prime that don’t carry a full range of traditional channels. From that market perspective, one has to wonder how much of a competitor DirecTV Now was in the specific markets, or even how Charter was able to quantify the level of competition from a specific OTT service.

Buying Big Telco Properties

Over the years I have helped several clients buy telephone properties from the big telcos. This stretches back to over 30 years ago when US West sold off some rural exchanges in the Dakotas. Over the years I’ve had some role in other transactions where bigger telcos sold off an exchange or group of exchanges around the country.

I’m writing about this today because this topic is coming up again after a hiatus of a few years when there weren’t many such transactions. A few of the big telcos have quietly put parts of their historic footprint up for sale, or they are now willing to talk about selling. The following are a few issues that anybody thinking about buying a big telco company property should consider.

Condition of Network Assets. The assets from big telcos are in sad shape. Big telco copper networks are ancient. I remember helping two different parties examine the possibility of buying a large Verizon property twenty years ago and the copper networks were already in bad shape then. The big telcos began ignoring copper networks soon after divestiture in 1984 and have completely abandoned maintenance in the last few decades. Some rural telco properties have a significant amount of backbone fiber, but even much of that is getting old. Some properties have seen a recent flurry of new fiber due to the CAF II program, but this is generally not extensive. All of the other assets like buildings, huts, cabinets, and vehicles are likely to be old and tired.

Staffing Will Be Sporadic. Often when somebody buys a smaller telco, they have a chance to pick up a qualified staff. This is important because inheriting a staff means inheriting institutional memory. The employees know the customers and know the nuances of the network.

You don’t generally get that when buying big telco properties. Generally, such purchases come with only a few outside technicians. Big telcos still perform many functions remotely – customer service, for example, is handled in distant call centers. You’ll get nobody who knows about provisioning, billing, pricing, or anything related to the backoffice. The outside technicians available in the purchase are often older and near retirement age. They are also likely to be unionized, which might keep them from taking a job with a buyer that isn’t.

Records Will Be Dreadful. The big telco will promise to give you all of the records you need to operate the business, but this will turn out to largely be a fantasy. It will turn out that many maps were never created, that customer service records are wrong, and that the records showing the facilities used to serve each customer are incomplete or full of errors. This is the primary reason customers complain about a new buyer of a big telco property – without the records, a buyer will struggle for 6 to 8 months to figure out the business. I worked with one buyer who was still discovering unbilled circuits many years after a purchase.

Transition Costs Will be High. For the various reasons listed above, the costs to transition from the old telco to new systems and employees will be higher than expected. You’ll find yourself throwing money at trying to straighten out things like billing.

There Will be Surprises. Regardless of the preparation effort for a transition, buying a big telco property will mean some big ugly surprises. Maybe 911 circuits will go dead. Perhaps there will be no SS7 connection. Maybe emails will crash. There will be database issues and some customers will be unable to make or receive calls. Expect the first two months after the purchase to be putting out fires and dealing with irate customers.

One thing will not be a surprise. Within a month or two, the public will decide that the new provider is no better than the old big telco. The telcos have already been bleeding customers, and a new buyer is likely to lose up to 10% of the customer base in the first year.

Why Do You Want to Buy? I challenge any potential buyer to answer this question. It’s almost unimaginable to consider buying a big telco property without plans for upgrading or overbuilding it. The question to ask is whether it’s better to selectively overbuild rather than buy a property and then pay again to overbuild it. The math may favor buying first and then overbuilding due to the existing revenue stream. But there is also a good chance that, with honest math that recognizes the reality of the transition, buying is a terrible decision. Going straight to overbuilding avoids many regulatory burdens such as being the carrier of last resort and avoids serving remote customers who are too expensive to upgrade. Overbuilders are loved by the public – something a buyer of a big telco property can only dream about.

The Future of Coaxial Networks

My blog devotes a lot of time to fiber deployment, but since the majority of people in the US get broadband from cable companies using hybrid fiber/coaxial (HFC) technology, today’s blog looks at the next generation of changes planned for HFC.

DOCSIS 4.0. The current generation of HFC technology is DOCSIS 3.1. This technology uses 1.2 GHz of spectrum over coaxial cable. DOCSIS 3.1 has several competitive drawbacks compared to fiber. First, while the technology can deliver gigabit download speeds to customers, the dirty secret of the industry is that gigabit speeds can only be given to a limited number of customers. With current node sizes, cable companies can’t support very many large data users without sacrificing the performance of everybody in a node. This is why you don’t see cable companies pricing gigabit broadband at competitive prices or pushing it very hard.

The other big drawback is that upload speeds on DOCSIS 3.1 are set by specification to be no more than one-eighth of the total bandwidth on the system. Most cable companies don’t even allocate that much to upload speeds.
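
To see why gigabit tiers are hard to support broadly on a shared node, here is a toy oversubscription sketch. The node capacity, subscriber count, and peak-activity figures are illustrative assumptions, not any operator’s real numbers:

```python
# Toy oversubscription sketch for a shared DOCSIS 3.1 node.
# All capacity and subscriber figures are illustrative assumptions.

node_capacity_mbps = 9_000    # assumed usable downstream capacity of one node
subscribers = 300             # assumed homes sharing the node
peak_active_share = 0.4       # assumed fraction of homes active at the evening peak

active_homes = int(subscribers * peak_active_share)
print(f"Average available per active home: {node_capacity_mbps / active_homes:.0f} Mbps")

# Each simultaneous gigabit user takes a large slice of the shared pool:
gig_users = 5
remaining = node_capacity_mbps - gig_users * 1_000
print(f"Capacity left after {gig_users} simultaneous gigabit streams: {remaining:,} Mbps")
```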

The primary upgrade with DOCSIS 4.0 will be to increase system bandwidth to 3 GHz. That supplies enough additional bandwidth to provide symmetrical gigabit service or else offer products that are faster than 1 Gbps download. It would also allow a cable company to support a lot more gigabit customers.

The big drawback to the upgrade is that many older coaxial cables won’t be able to handle that much bandwidth and will have to be replaced. Further, upgrading to 3 GHz is going to mean replacing or upgrading power taps, repeaters, and other field hardware in the coaxial network. CableLabs is talking about finalizing the DOCSIS 4.0 specification by the end of 2020. None of the big cable companies have said if and when they might embrace this upgrade. It seems likely that many of the bigger cable companies are in no hurry to make this upgrade.

Low Latency DOCSIS (LLD). Another drawback of HFC networks is that they don’t have the super-low latency needed to support applications like intense gaming or high-quality video chat. The solution is a new encoding scheme being called low latency DOCSIS (LLD).

The LLD solution doesn’t change the overall latency of the cable network but instead prioritizes low-latency applications. The result is to increase the latency for other applications like web-browsing and video streaming.

This can be done because most of the latency on an HFC network comes from the encoding schemes used to layer broadband on top of cable TV signals. The encoding schemes on coaxial cable networks are far more complex than fiber encoding. There are characteristics of copper wires that cause natural interference within a transmission path. A coaxial encoding scheme must account for attenuation (loss of signal over distance), noise (the interference that appears from external sources since copper acts as a natural antenna), and jitter (the fact that interference is not linear and comes and goes in bursts). Most of the latency on a coaxial network comes from the encoding schemes that deal with these conflicting characteristics. The LLD solution bypasses traditional encoding for the handful of applications that need low latency.
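
Conceptually, the prioritization behaves like a two-queue scheduler in which latency-sensitive packets jump the line and everything else waits slightly longer. The sketch below illustrates only that idea – it is not the actual DOCSIS LLD specification:

```python
# Conceptual two-queue priority scheduler illustrating the LLD trade-off:
# latency-sensitive traffic is served first, other traffic waits a bit longer.
# This is a simplified illustration, not the DOCSIS LLD spec itself.

from collections import deque

low_latency_q = deque()   # e.g., gaming, video chat
classic_q = deque()       # e.g., web browsing, video streaming

def enqueue(packet, latency_sensitive):
    (low_latency_q if latency_sensitive else classic_q).append(packet)

def next_packet():
    # Strict priority: always drain the low-latency queue first.
    if low_latency_q:
        return low_latency_q.popleft()
    return classic_q.popleft() if classic_q else None

enqueue("web-page chunk", latency_sensitive=False)
enqueue("game state update", latency_sensitive=True)
print(next_packet())   # "game state update" is served first
print(next_packet())   # "web-page chunk" waited behind it
```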

Virtual CMTS. One of the more recent improvements in coaxial technology was distributed access architecture (DAA). This technology allows for disaggregating the CMTS (the router used to provide customer broadband) from core routing functions, meaning that the CMTS no longer has to sit at the core of the network. The easiest analogy to understand DAA is to consider modern DSLAM routers. Telephone companies can install a DSLAM at the core of the network, but they can instead put the DSLAM at the entrance to a subdivision to get it closer to customers. DAA allowed cable companies to make this same change.

With virtual CMTS a cable network takes DAA a step further. In a virtual CMTS environment, the cable company might perform some of the CMTS functions in remote data centers in the cloud. There will still be a piece of electronics where the CMTS used to sit, but many of the computing functions can be done remotely.

A cloud-based CMTS offers some advantages to the cable operator:

  • Allows for customizing portions of a network. The data functions provided to a business district can be different from what is supplied to a nearby residential neighborhood. Customization can even be carried down to the customer level for large business customers.
  • Allows for the use of cheap off-the-shelf hardware, similar to what’s been done in the data centers used by the big data companies like Google and Facebook. CMTS hardware has always been expensive because it’s been made by only a few vendors.
  • Improves operations by saving on local resources like local power, floor/rack space, and cooling by moving heavy computing functions to data centers.

Summary. There is a lot of discussion within the cable industry asking how far cable companies want to push HFC technology. Every CEO of the major cable companies has said that their eventual future is fiber, and while each of the above changes brings HFC closer to fiber performance, none of them is as good as fiber. Some Wall Street analysts have predicted that cable companies won’t embrace bandwidth upgrades for a while since they already have the marketing advantage of being able to claim gigabit speeds. The question is whether the cable companies are willing to make the expensive investment to functionally come closer to fiber performance or if they are happy to just claim to be equivalent to fiber performance.

Do Cable Companies Have a Wireless Advantage?

The big wireless companies have been wrangling for years with the issues associated with placing small cells on poles. Even with new FCC rules in their favor, they are still getting a lot of resistance from communities. Maybe the future of urban/suburban wireless lies with the big cable companies. Cable companies have a few major cost advantages over the wireless companies including the ability to bypass the pole issue.

The first advantage is the ability to deploy mid-span cellular small cells. These are cylindrical devices that can be placed along the coaxial cable between poles. I could not find a picture of these devices, and the picture accompanying this article is of a strand-mounted fiber splice box – but it’s a good analogy since the strand-mounted small cell device is approximately the same size and shape.

Strand-mounted small cells provide a cable company with a huge advantage. First, they don’t need to go through the hassle of getting access to poles and they avoid paying the annual fees to rent space on poles. They also avoid the issue of fiber backhaul since each unit can get broadband using a DOCSIS 3.1 modem connection. The cellular companies don’t talk about backhaul a lot when they discuss small cells, but since they don’t own fiber everywhere, they will be paying a lot of money to other parties to transport broadband to the many small cells they are deploying.

The cable companies also benefit because they could quickly deploy small cells anywhere they have coaxial cable on poles. In the future, when wireless networks might need to be very dense, the cable companies could deploy a small cell between every pair of poles. If the revenue benefits of providing small cells are great enough, this could even prompt the cable companies to expand the coaxial network to nearby neighborhoods that might not otherwise meet their density tests, which for most cable companies means only building where there are at least 15 to 20 potential customers per linear mile of cable.

The cable companies have another advantage over the cellular carriers in that they have already deployed a vast WiFi network comprised of customer WiFi modems. Comcast claims to have 19 million WiFi hotspots. Charter has a much smaller footprint of 500,000 hotspots but could expand that count quickly if needed. Altice is reportedly investing in WiFi hotspots as well. The big advantage of WiFi hotspots is that the broadband capacity of the hotspots can be tapped to act as landline backhaul for cellular data and even voice calls.

The biggest cable companies are already benefitting from WiFi backhaul today. Comcast just reported to investors that they added 204,000 wireless customers in the third quarter of 2019 and now have almost 1.8 million wireless customers. Charter is newer to the wireless business and added 276,000 wireless customers in the third quarter and now has almost 800,000 wireless customers.

Both companies are buying wholesale cellular capacity from Verizon under an MVNO contract. Any cellular minute or cellular data they can backhaul with WiFi doesn’t have to be purchased from Verizon. If the companies build small cells, they would further free themselves from the MVNO arrangement – another cost savings.

A final advantage for the cable companies is that they are deploying small cell networks where they already have a workforce to maintain the network. Both AT&T and Verizon have laid off huge numbers of workers over the last few years and no longer have fleets of technicians in all of the markets where they need to deploy cellular networks. These companies are faced with adding technicians where their network is expanding from a few big-tower cell sites to vast networks of small cells.

The cable companies don’t have nearly as much spectrum as the wireless companies, but they might not need it. The cable companies will likely buy spectrum in the upcoming CBRS auction and the other mid-range spectrum auctions over the next few years. They can also use the 80 MHz of free CBRS spectrum that’s available everywhere.

These advantages equate to a big cost advantage for the cable companies. They save on speed to market and avoid paying for pole-mounted small cells. Their networks can provide the needed backhaul practically for free. They can offload a lot of cellular data through the customer WiFi hotspots. And the cable companies already have a staff to maintain the small cell sites. At least in the places that have aerial coaxial networks, the cable companies should have higher margins than the cellular companies and should be formidable competitors.

Starlink Making a Space Grab

SpaceNews recently reported that Elon Musk and his low-orbit space venture Starlink have filed with the International Telecommunication Union (ITU) to launch an additional 30,000 broadband satellites in addition to the 11,927 now in the planning stages. This looks like a land grab, with Musk hoping to claim valuable orbital satellite paths to keep them away from competitors.

The new requests consist of 20 filings requesting to deploy 1,500 satellites each in 20 different orbital bands around the earth. These filings throw down the gauntlet to other planned satellite providers like OneWeb, which plans 1,910 satellites; Kuiper (Jeff Bezos), with plans for 3,326 satellites; and Samsung, with plans for 4,600 satellites.

The Starlink announcements are likely aimed at stirring up regulators at the ITU, which is meeting at the end of this month to discuss spectrum regulations. The FCC has taken the lead in developing satellite regulations. Earlier this year the FCC established a rule that an operator must deploy satellites on a timely basis to keep the exclusive right to the spectrum needed to communicate with the satellites. Under the current FCC rules, a given deployment must be 50% deployed within six years and completely deployed within nine years. In September, Starlink revised its launch plans with the FCC in a way that meets the new FCC guidelines, as follows:

              Satellites   Altitude (Km)   50% Completion   100% Completion
Phase 1            1,584             550       March 2024        March 2027
                   1,600           1,110
                     400           1,130
                     375           1,275
                     450           1,325
Phase 2            2,493             336         Nov 2024          Nov 2027
                   2,478             341
                   2,547             346
Total             11,927

This is an incredibly aggressive schedule and would require the company to launch 5,902 satellites by November 2024, or 120 satellites per month beginning in November 2019. To date, the company has launched 62 satellites. The company would then need to step launches up to 166 per month to complete the second half on time.
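
Those figures can be re-derived from the deployment table above; the month counts between deadlines are approximations on my part:

```python
# Re-deriving the launch figures from the deployment table above.
from math import ceil

phase1_shells = [1_584, 1_600, 400, 375, 450]   # Phase 1 satellites per shell
phase2_shells = [2_493, 2_478, 2_547]           # Phase 2 satellites per shell
already_launched = 62

total = sum(phase1_shells) + sum(phase2_shells)
fifty_pct_milestone = ceil(sum(phase1_shells) / 2) + ceil(sum(phase2_shells) / 2)
still_to_launch = fifty_pct_milestone - already_launched
second_half_pace = (total - fifty_pct_milestone) / 36   # ~36 months, Nov 2024 to Nov 2027

print(f"Total planned satellites:       {total:,}")              # 11,927
print(f"Needed for the 50% deadlines:   {fifty_pct_milestone:,}")
print(f"Still to launch beyond today's: {still_to_launch:,}")    # ~5,902
print(f"Second-half pace required:      ~{second_half_pace:.0f} per month")  # ~166
```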

I’m guessing that Starlink is already starting to play the regulatory game. For example, if they can’t meet the launch dates over the US in that time frame, then some of the constellations might not work in the US. If the company eventually launches all of the satellites it has announced, then not every satellite would need to serve customers everywhere. If the ITU adopts a timeline similar to the US, then it’s likely that other countries won’t award spectrum to every one of the Starlink constellations. Starlink will be happy if each country gives it enough spectrum to be effective there. Starlink’s strategy might be to flood the sky with so many satellites that it can provide service anywhere as long as at least a few of its constellations are awarded spectrum in each country. There are likely to be countries like North Korea, and perhaps China, that won’t allow any connections with satellite constellations that bypass their web firewalls.

Starlink faces an additional challenge with many of the planned launches. Any satellite with an orbit of less than 340 kilometers (211 miles) is considered very low earth orbit (VLEO) since there is still enough earth atmosphere at that altitude to cause drag that eventually degrades a satellite’s orbit. Anything deployed at VLEO heights will have a shorter than normal life. The company has not explained how it plans to maintain satellites at the VLEO altitudes.

At this early stage of satellite deployment, there is no way to know if Starlink is at all serious about wanting to launch 42,000 satellites. This may just be a strategy to get more favorable regulatory rules. If Starlink is serious about this, you can expect other providers to speed up plans to avoid being locked out of orbital paths. We’re about to see an interesting space race.

Nielsen’s Law of Internet Bandwidth

One of the more interesting rules-of-thumb in the industry is Nielsen’s Law of Internet bandwidth, which states that:

  • A high-end user’s connection speed grows by 50% per year.

This ‘law’ was postulated by Jakob Nielsen of the Nielsen Norman Group in 1998 and subsequently updated in 2008 and 2019. Nielsen started by looking at usage for himself and other big data users, going back to a 300 bps (bits per second) modem used in 1984. In 1998 Nielsen had measured growth at 53% annually and rounded to 50%. In the ten years from 1998 to 2008, he had measured growth to be 49% annually. At least for himself and other big data users, this ‘law’ has held steady for 36 years.

While this is not really a law, but rather an interesting observation, it’s something that all ISPs should notice. In my time in the industry, I’ve seen the bandwidth use of the largest users grow at a faster pace than everybody else. This is something every network engineer ought to keep in mind when designing networks.

Consider bandwidth at schools. I recall seeing some schools get gigabit connections a decade ago. When first installed the gigabit connections seemed to be oversized and schools wondered at the time if they needed that much bandwidth. But since then they’ve figured it out and many schools have grown past a gigabit connection and want a lot more. School networks that were thrilled to find an ISP that could provide a gigabit of bandwidth are now looking to build private fiber networks as the most affordable solution for satisfying the bigger bandwidth needs they see coming in future years.

We’ve seen the same thing at hospitals, factories, and other large businesses that have embraced the cloud. Businesses subscribe to large data pipes and then outgrow them in only a few years.

Network engineers are generally cautious people because they have to balance capital budgets against future broadband demand. Given an unlimited budget, many network designers would oversize data pipes, but they don’t like to be accused of wasting money. I can’t even count the number of times I’ve heard from network engineers who thought they were designing a network ready for the next decade only to find it full in half that time.

Nielsen points out a statistic that most of us have a hard time grasping. A network experiencing 50% annual growth will use over 57 times more data a decade from now than it uses today. He compares the growth of bandwidth to Moore’s law, which says that computer chip capacity doubles every 18 months. That works out to over 100 times more capacity after a decade of growth.
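
The math is simple compounding, as the short calculation below shows:

```python
# Compounding behind Nielsen's Law versus Moore's Law over a decade.
nielsen_decade = 1.5 ** 10         # 50% annual bandwidth growth for 10 years
moore_decade = 2 ** (10 / 1.5)     # doubling every 18 months for 10 years

print(f"Bandwidth after a decade at 50%/year: about {nielsen_decade:.0f}x today")  # ~58x
print(f"Chip capacity after a decade (Moore): about {moore_decade:.0f}x today")    # ~102x
```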

The only place we see this kind of rampant growth for entire networks today is urban cell sites where data usage is doubling every two years. That’s a startling growth rate when you think of it in real terms. A cellular carrier that finds a way to double the capacity of an urban cellular network will see that new capacity gobbled up within two years.

It’s not hard to understand why the cellular industry is in a panic and is looking at every way possible to expand capacity. Interestingly, the industry has elected to hide its concern about growth behind the story that we need to do everything possible to enable 5G. I guess it’s hard for the cellular industry to expose its vulnerability by simply telling the public that it needs to greatly expand cellular capacity. This need for capacity is why they are building small cell sites, buying more spectrum, and pushing their labs to finish the development of 5G – they need all three of those things just to keep up with growing demand.

All network owners need to acknowledge that there are parts of their network where the demand is growing faster than average, and any network updates should make certain that the largest customers get the future capacity they are sure to need.

Mapping Cellular Data Speeds

AT&T recently filed comments in Docket 19-195, the docket that is looking to change broadband mapping, outlining the company’s proposal for reporting wireless data speeds to the FCC. I think a few of their recommendations are worth noting.

4G Reporting. Both AT&T and Verizon support reporting on 4G cellular speeds using a 5 Mbps download and 1 Mbps upload test with a cell edge probability of 90% and a loading of 50%. Let me dissect that recommendation a bit. First, this means that a customer has a 90% chance of being able to make a data connection at the defined edge of a cell tower’s coverage range.

The more interesting reporting requirement is the 50% loading factor. This means the reported coverage area would meet the 5/1 Mbps speed requirement only when a cell site is 50% busy with customer connections. Loading is something you rarely see the cellular companies talk about. Cellular technology is like most other shared bandwidth technologies in that a given cell site shares bandwidth with all users. A cell site that barely meets the 5/1 Mbps data speed threshold when it’s 50% busy is going to deliver significantly slower speeds as the cell site gets busier. We’ve all experienced degraded cellular performance at rush hour – the normal peak time for many cell sites. This reporting requirement is a good reminder that cellular data speeds vary during the day according to how many people are using a cell site – something the cellular companies never bother to mention in their many ads talking about their speeds and coverage.
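
As a simple illustration of why the loading factor matters, the toy calculation below shows per-user speeds falling as a sector fills up. The sector capacity and user counts are my own assumptions, not carrier figures:

```python
# Toy sketch of per-user speed as a cell sector gets busier.
# The sector capacity and user counts are illustrative assumptions.

sector_capacity_mbps = 150    # assumed total usable capacity of one LTE sector
users_at_full_load = 60       # assumed simultaneous users when the sector is 100% loaded

for loading in (0.25, 0.50, 0.75, 1.00):
    active_users = int(users_at_full_load * loading)
    per_user = sector_capacity_mbps / active_users
    print(f"{int(loading * 100):>3}% loaded: {active_users:>2} active users, ~{per_user:.1f} Mbps each")
```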

The recommended AT&T maps would show areas that meet the 5/1 Mbps speed threshold, with no requirement to report faster speeds. I find this recommendation surprising because Opensignal reports that average US 4G LTE speeds are as follows:

            2017        2018
AT&T        12.9 Mbps   17.87 Mbps
Sprint      9.8 Mbps    13.9 Mbps
T-Mobile    17.5 Mbps   21.1 Mbps
Verizon     14.9 Mbps   20.9 Mbps

I guess that AT&T favors the lowly 5/1 Mbps threshold since that will show the largest possible coverage area for wireless broadband. While many AT&T cell sites provide much faster speeds, my guess is that most faster cell sites are in urban areas and AT&T doesn’t want to provide maps showing faster speeds such as 15 Mbps because that would expose how slow their speeds are in most of the country. If AT&T offered faster speeds in most places, they would be begging to show multiple tiers of cellular broadband speeds.

Unfortunately, maps using the 5/1 Mbps criteria won’t distinguish between urban places with fast 4G LTE and more rural places that barely meet the 5 Mbps threshold – all AT&T data coverage will be homogenized into one big coverage map.

About the only good thing I can say about the new cellular coverage maps is that if the cellular companies report honestly, we’re going to see the lack of rural cellular broadband for the first time.

5G Broadband Coverage. I don’t think anybody will be shocked that AT&T (and the other big cellular companies) don’t want to report 5G. Although they are spending scads of money touting their roll-out of 5G, they think it’s too early to tell the public where they have coverage.

AT&T says that requiring 5G reporting at this early stage of the new technology would reveal sensitive information about cell site location. I think customers who pony up extra for 5G want to know where they can use their new expensive handsets.

AT&T wants 5G coverage to fall under the same 5/1 Mbps coverage maps, even though the company is touting vastly faster speeds using new 5G phones.

It’s no industry secret that most of the 5G deployment announcements are done mostly for public relations purposes. For example, AT&T is loudly proclaiming the number of major cities that now have 5G, but this filing shows that the company doesn’t want the public to know the small areas that can participate in these early market trials.

If 5G is a reasonable substitute for landline broadband, then the technology should not fall under the cellular reporting requirements. Instead, the cellular carriers should be forced to show where they offer speeds exceeding 10/1 Mbps, 25/3 Mbps, 100/10 Mbps, and 1 Gbps. I’m guessing a 5G map using these criteria would largely show a country that has no 5G coverage – but we’ll never know unless the FCC forces the wireless companies to tell the truth. I think that people should be cautious about spending extra for 5G-capable phones until the cellular carriers are honest with them about 5G coverage.