Broadband Usage Continues to Grow

The firm OpenVault, a provider of software that measures data consumption for ISPs, reported that the average monthly data use by households grew from 201.6 gigabytes in 2017 to 268.7 gigabytes in 2018 – a growth rate of 33%. The company also reported that the median use per household grew from 103.6 gigabytes in 2017 to 145.2 gigabytes in 2018 – a growth rate of 40%. The median represents the midpoint of users, with half of all households above and half below it.

To some degree, these statistics are not news because we’ve known for a long time that broadband usage at homes, both in total download and in desired speeds, has been doubling every three years since the early 1980s. The growth in 2018 was actually a little faster than that historical average, and if the 2018 growth rate were sustained, usage in three years would be roughly 235% of today’s level. What I find most impressive about these new statistics is the magnitude of the annual change – the average home used 67 more gigabytes of data per month in 2018 than the year before – a number that would have seemed unbelievable only a decade ago, when the average household used a total of only 25 gigabytes per month.
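Here is a minimal sketch in Python of the compounding math behind those figures, using the OpenVault averages quoted above:

```python
# Back-of-envelope check of the growth figures discussed above.
avg_2017 = 201.6  # average gigabytes per household per month (OpenVault)
avg_2018 = 268.7

annual_growth = avg_2018 / avg_2017 - 1
print(f"2017-2018 growth rate: {annual_growth:.0%}")              # ~33%

# Sustaining that rate for three years puts usage at roughly
# 235% of today's level (a 135% increase).
print(f"After three years: {(1 + annual_growth) ** 3:.0%} of today's usage")

# The absolute annual change that the paragraph highlights.
print(f"Added monthly usage in 2018: {avg_2018 - avg_2017:.0f} GB")  # ~67 GB
```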

There are still many in the industry who are surprised by these numbers. I’ve heard people claim that now that homes are watching all the video they want, the rate of growth is bound to slow down – but if anything, the rate of growth seems to be accelerating. We also know that cellular data consumption is now doubling every two years.

This kind of growth has huge implications for the industry. From a network perspective, this level of usage puts a big strain on networks. Typically the most strained part of a network is the backbone that feeds the neighborhood nodes. That’s the primary stress point in many networks, including FTTH networks, and when there isn’t enough bandwidth to a neighborhood, everybody’s bandwidth suffers. Somebody who designed a network ten years ago would never have believed the numbers that OpenVault is reporting and likely would not have designed a network that would still be sufficient today.

One consequence of the bandwidth growth is that it’s driving homes to change to faster service providers when they have the option. A household that might have been happy with a 5 Mbps or 10 Mbps connection a few years ago is likely no longer happy with it. This has to be one of the reasons we are seeing millions of homes in metropolitan areas upgrade from DSL to cable modems each year. The kind of usage growth we are seeing today has to be accelerating the death of DSL.

This growth also should be affecting policy. The FCC set the definition of broadband at 25/3 Mbps in January of 2015. If that was a good definition in 2015, then to keep pace the definition should have been increased to 63 Mbps in 2019. At the time the FCC set that threshold I thought they were a little generous. In 2014, as the FCC was having this debate, the average home downloaded around 100 gigabytes per month, and the right definition of broadband was probably more realistically 15 – 20 Mbps – the FCC was obviously being a little forward-looking in setting the definition. Even so, the definition of broadband should be increased – if the right definition in 2014 was 20 Mbps, then by today it ought to have been raised to 50 Mbps.

The current FCC is ignoring these statistics for policy purposes – if they raise the definition of broadband, then huge numbers of homes will be classified as not having broadband. The FCC does not want to do that, since they are required by Congressional edict to make sure that all homes have broadband. When the FCC set a realistic definition of broadband in 2015 they created a dilemma for themselves. That 2015 definition is already obsolete, and if they don’t change it, in a few years it is going to be absurd. One only has to look forward three years from now, when the definition of broadband ought to be 100 Mbps.

These statistics also remind us of the stupidity of handing out federal subsidies to build technologies that deliver less than 100 Mbps. We still have two more years of CAF II construction to upgrade speeds to an anemic 10 Mbps. We are still handing out new subsidies to build networks that can deliver 25/3 Mbps – networks that are obsolete before they are completed.

Network designers will tell you that they try to design networks to satisfy demands at least seven years into the future (which is the average life of many kinds of fiber electronics). If broadband usage keeps doubling every three years, then looking forward seven years to 2026, the average home is going to download 1.7 terabytes per month and will expect download speeds of 318 Mbps. I wonder how many network planners are using that target?
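As a minimal sketch of that doubling rule, here is the projection in Python. The 2019 starting points are assumptions chosen to roughly match the figures discussed in these posts:

```python
# Project usage and expected speed to 2026 assuming a doubling every
# three years. The 2019 starting values are assumptions that roughly
# match the figures discussed in these posts.
def double_every_three_years(value: float, years: float) -> float:
    return value * 2 ** (years / 3)

usage_2019_gb = 340      # assumed average monthly household usage in 2019
speed_2019_mbps = 63     # the broadband definition argued for above

print(f"2026 usage: {double_every_three_years(usage_2019_gb, 7) / 1000:.1f} TB per month")
print(f"2026 expected speed: {double_every_three_years(speed_2019_mbps, 7):.0f} Mbps")
```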

The final implications of this growth are for data caps. Two years ago, when Comcast set a terabyte monthly data cap, they said that it affected only a few homes – and I’m sure they were right at the time. However, the OpenVault statistics show that 4.12% of homes used a terabyte or more per month in 2018, almost double the 2.11% in 2017. We’ve now reached the point where the terabyte data cap is going to have teeth, and over the next few years a lot of homes are going to pass that threshold and have to pay a lot more for their broadband. While much of the industry has a hard time believing the growth statistics, I think Comcast knew exactly what they were doing when they established a terabyte cap that seemed so high just a few years ago.
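To see how quickly that could snowball, here is a naive extrapolation (my own illustration, not an OpenVault projection) that simply assumes the share of terabyte households keeps doubling each year:

```python
# Naive extrapolation: assume the share of homes using 1 TB or more per
# month keeps doubling annually, as it roughly did from 2017 to 2018.
share = 0.0412  # 4.12% of homes in 2018 (OpenVault)
for year in range(2019, 2024):
    share = min(share * 2, 1.0)
    print(f"{year}: {share:.1%} of homes over a 1 TB cap")
```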

Broadening the USF Funding Base

The funding mechanism that pays for the Universal Service Fund is broken. The USF is funded from fees added to landline telephones, cell phones and large business data connections that are still billed using telco special access products (T1s and larger circuits). The USF fee has now climbed to an exorbitant 20% of the portion of those services that the FCC deems to be interstate. This equates to a monthly fee of a dollar or more for every landline phone and cellphone (the amount charged varies by carrier).

The funding mechanism made sense when it was originally created. The fee at that time was assessed on landlines and was used to build and strengthen landline service in rural America. When the USF fee was introduced, the nationwide penetration rate of landlines in urban America was over 98%, and the reasoning was that those with phone service ought to be charged a small fee to help bring phone service to rural America. The concept behind universal service is that everybody in the country is better off when we’re all connected to the communications network.

However, over time the use of the Universal Service Fund has changed drastically, and this money is now the primary mechanism the FCC uses to pay for the expansion of rural broadband. This pot of money was used to fund the original CAF II program for the big telcos and the A-CAM program for the smaller ones. It’s also the source of the Mobility Fund, which is used to expand rural cellular coverage.

Remember the BDAC? That’s the Broadband Deployment Advisory Committee that was created by Chairman Ajit Pai when he first took the reins at the FCC. The BDAC was split into numerous subcommittees that looked at specific topics. Each BDAC subcommittee issued a report of recommendations on its topic, and since then little has been heard about them. But the BDAC subcommittees are still meeting and churning out recommendations.

The BDAC subcommittee tasked with creating a State Model Code has suggested broadening the funding base for the USF. This is the one subcommittee that is not making recommendations to the FCC but rather suggesting ideas that states ought to consider. The committee has suggested that states establish a fee, similar to the federal USF fee, and use it to expand broadband in each state. Many states have already done something similar and have created state Universal Service Funds.

The recommendation further suggests that states tax anybody who benefits from broadband. This would include not just ISPs and their customers, but also the big users of the web like Netflix, Google, Amazon, Facebook, etc. The reasoning is that those who benefit from broadband ought to help pay to expand broadband to everybody. The BDAC’s recommended language has been modified a few times because the original language was so broad that almost everybody in the country would have been subject to the tax, and we’ve learned over the years that taxation language needs to be precise.

This is not the first time that this idea has been floated. Many have suggested to the FCC in the past that USF funding should be expanded to include broadband customers. Just as telephone customers were charged to fund the expansion of the telephone network, it makes sense to tax broadband customers to expand broadband. But this idea has always been shot down because, early in the life of the Internet, politicians in DC latched onto the idea of not taxing the Internet. That made sense at the time, when we needed to protect the fledgling ISP industry – but the concept is now quaintly obsolete, since Internet-related companies are probably collectively the world’s biggest industry and hardly need shielding from taxation.

AT&T is a member of this BDAC subcommittee and strongly supports the idea. However, AT&T’s motivations are suspect since they might be the biggest recipient of state USF funds. We saw AT&T lobbyists hijack the state broadband grant program in California and grab all of the money that would have been used to build real rural broadband in the state. The big carriers have an outsized influence in statehouses due to decades of lobbying, so there is a concern that they support this idea for their own gain rather than to spread broadband. We just saw AT&T lobbyists at the federal level sneak in language that makes it hard to use the e-Connectivity grants to compete with them.

But no matter how tainted the motivations of those on the BDAC committee, this is an idea with merit. It’s hard to find politicians anywhere who don’t think we should close the broadband gap, and it’s clear that it’s going to take some government support to make that happen. Currently there are a number of state broadband grant programs, but these generally rely on annual allocations from the legislature – something that is always used as a bargaining chip against other legislative priorities. None of these grant programs have allocated enough money to make a real dent in the broadband shortfalls in their states. If states are going to help solve the broadband gap, they need to come up with a lot more money.

Setting up state USF funds with a broad funding base is one way to help close the rural broadband divide. This needs to be done in such a way that the money is used to build the fiber infrastructure needed to guarantee broadband for the rest of the century – such funds will be worthless if the money is instead siphoned into the pockets of the big telcos. It makes sense to assess the fees on a wider base, and I can’t see any reasonable objection to charging not only broadband customers but also big broadband-reliant companies like Netflix, Google, Amazon, and Facebook. The first state to try this will get a fight from those companies, but hopefully the idea of equity will win, since it’s traffic from these companies that is driving the need for better broadband infrastructure.

Breakthroughs in Light Research

It’s almost too hard to believe, but I’ve heard network engineers suggest that we may soon exhaust the bandwidth capacity of our busiest backbone fiber routes, particularly in the northeast. At the rate our use of data is growing, we will outgrow the total capacity of existing fibers unless we develop faster lasers or build new fiber. The natural inclination is to build more fiber – but at the rate our data is growing, we would consume the capacity of new fibers almost as quickly as they are built. Lately scientists have been working on the problem, and there have been a lot of breakthroughs in working with light in ways that can enhance laser communications.

Twisted Light. Dr. Haoran and a team at the RMIT School of Science in Melbourne, Australia have developed a nanophotonic device that lets them read twisted light. Scientists have found ways to bend light into spirals in a state known as orbital angular momentum (OAM). The twisted nature of the light beams presents the opportunity to encode significantly more data than straight-path laser beams due to the convoluted configuration of the light beam. However, until now nobody has been able to read more than a tiny segment of the twisted light.

The team has developed a nano-detector that separates the twisted light states into a continuous order, enabling them to both encode and decode using a wider range of the OAM light beam. The receiver is made of readily available materials, which should make it inexpensive and scalable for industrial production. The team at RMIT believes that with refinement the detector could bring about more than a 100-fold increase in the amount of data that could be carried on one fiber. The nature of the detector should also enable it to receive quantum data from the quickly emerging field of quantum computing.

Laser Bursts Generate Electricity. A team led by Ignacio Franco at the University of Rochester, along with a team from the University of Hong Kong, has discovered how to use lasers to generate electricity directly inside chips. They are using a glass thread that is a thousand times thinner than a human hair. When they hit this thread with a laser burst lasting one millionth of one billionth of a second, they’ve found that for a brief moment the glass acts like a metal and generates an electric current.

One of the biggest limitations of silicon computer chips is moving signals into the chip quickly. With this technique an electrical pulse can be created directly inside the chip, where and when it’s needed, meaning an improvement of several orders of magnitude in the speed of getting signals to chip components. The direction and magnitude of the current can be controlled by varying the shape of the laser beam – by changing its phase. This could also lead to the development of tiny chips operating at just above the size of simple molecules.

Infrared Computer Chips. Teams of scientists at the University of Regensburg in Germany and the University of Michigan have discovered how to use infrared lasers to shift electrons between two states of angular momentum on a thin sheet of semiconductor material. Flipping between the two electron states creates the classic 1 and 0 needed for computing, at the electron level. Ordinary electronics operate in the gigahertz range, meaning a device is limited to roughly 1 billion interactions with electrons per second. Being able to directly change the state of an electron could speed this up as much as a million times.

The scientists think it is possible to build a ‘lightwave’ computer with a clock a million times faster than today’s fastest chips. The next challenge is to develop the train of lasers that can produce the desired flips between the two states as needed. This process could also unleash quantum computing. The biggest current drawback of quantum computing is that the qubits – the output of a quantum computation – don’t last very long. A much faster clock could easily work inside the quantum time frames.

Breaking the Normal Rules of Light. Scientists at the National Physical Laboratory in England have developed a technique that changes the fundamental nature of light. Light generally moves through the world as a wave. The scientists created a device they are calling an optical ring resonator. It bends light into continuous rings, and as the light in the rings interacts it creates unique patterns that differ significantly from normal light. The light loses its vertical polarization (the wave peak) and begins moving in ellipses. The scientists hope that by manipulating light in this way they will be able to develop new designs for atomic clocks and quantum computers.

Keeping Up With the Rest of the World

One of my readers sent me an article that announced a fiber-to-the-home expansion in Nepal. The ISP is Vianet Communications P. Ltd, which uses Nokia GPON technology. The company built the first FTTH network in the country in 2011 and already serves 10 of the 75 districts. The current expansion will bring fiber to an additional 4 districts, with another district already scheduled after that. Vianet is a commercial ISP that began as a dial-up ISP in the capital of Kathmandu and is now expanding across the country with fiber. By the end of the year the ISP will have 200,000 customers on fiber, providing a minimum customer speed of 100 Mbps.

You don’t have to look hard to see similar stories around the world. Romania now has the fastest broadband in Europe and is ranked as having the sixth fastest broadband in the world. Romania’s broadband success story is unique since the fiber networks have largely been built by small neighborhood ISPs that have strung up their own fiber. There are local news articles that joke about the country having fiber-to-the-tree. The country had almost no telecom infrastructure at the end of the cold war, and local entrepreneurs and neighborhood groups have tackled the task of building the needed fiber infrastructure.

I’ve often heard it said that one of the reasons the rest of the world has more fiber than us is because the governments in those countries build the infrastructure. However, when you look closer at a lot of countries like Nepal and Romania, it’s commercial ISPs that are building fiber, not the government. Singapore has had the fastest broadband in the world for years and their fiber was built by three ISPs. There are similar stories everywhere you look.

If ISPs are able to build fiber in Nepal and Romania, why are they having such a hard time doing so here? There are a few key reasons.

Big ISPs in the US are driven by the quarterly earnings expected by Wall Street. They get crucified for not maximizing profits, and none of them can undertake any major expansion that would earn infrastructure returns of 7% – 12%. It doesn’t matter that the ISP business is a cash cow and spins off piles of cash once the business is mature – the big ISPs are structured such that they can’t really consider building fiber to residents.

Years ago Verizon took a hit for tackling FiOS, and even then the company was very disciplined and only built where construction costs were low. People are currently praising AT&T for passing over 10 million homes and businesses with fiber – but their network is the very definition of cherry-picking, serving a few homes here and a few homes there, near their existing fiber nodes.

There are plenty of smaller US ISPs that would love to build more fiber, but they have a hard time raising the money. Fifty years ago banks were the primary source of infrastructure lending, but over time, for various reasons, they no longer want to make the long-term loans necessary to support a fiber network. The big banks are also Wall Street driven, and banks make a significantly higher return on equity by churning shorter-term notes than by tying up money for 20 – 30 years.

One only has to visit a FISPA convention, the association for fiber overbuilders, to find numerous companies that would gladly tackle more fiber projects if they could borrow the money. Just about every member of FISPA will tell you that borrowing money is their biggest challenge.

The countries building fiber have found ways to overcome these issues. The ISPs there are able to borrow money to expand fiber networks, and their banks love the guaranteed long-term steady returns from broadband. The countries I’ve mentioned have one natural advantage over many parts of the US in that they have a higher population density. Nepal has 29 million people and is about the same size as Michigan. Romania is a little smaller than Oregon with a population of 19 million. However, they have other challenges. As you can see from the map accompanying this blog, Nepal has some of the most challenging topography in the world. Both countries are far poorer than the US, and yet they are finding ways to get fiber built – because, like everywhere, there is a big demand for broadband.

I’ve said many times in this blog that we need government help to build fiber in the rural parts of the country. That’s simply due to the cost of a fiber network calculated per household – the numbers don’t work in most rural places. However, I’ve created hundreds of fiber business plans, and it generally looks feasible to build fiber in most other places in the country, yet there is no flood of ISPs building fiber in our towns, cities and suburbs. Detractors of municipal fiber always say that our broadband problems ought to be solved by the private sector – but I look around, and in 95% of America the private sector hasn’t shown up.

The End of Satellite TV?

DirecTV launched its most recent satellite in May of 2015. The company has launched 16 satellites in its history and, with twelve remaining in service, is the largest commercial satellite company in the world. AT&T, the owner of DirecTV, announced at the end of last year that there would be no future satellite launches. Satellites don’t last forever, and that announcement marks the beginning of the death of DirecTV. The satellites launched before 2000 are now defunct, and the satellites launched after that will start going dark over time.

AT&T is instead going to concentrate on video service delivered over the web. They are now pushing customers to subscribe to DirecTV Now or WatchTV rather than the satellite service. We’ve already seen evidence of this shift: DirecTV was down to 19.6 million customers, having lost a net of 883,000 customers in the first three quarters of 2018. The other satellite company, Dish Networks, lost 744,000 customers in the same nine-month period.

DirecTV is still the second largest cable provider, now 2.5 million customers smaller than Comcast, but 3 million customers larger than Charter. It can lose a few million customers per year and still remain as a major cable provider for a long time.

In much of rural America, the two satellite companies are the only TV option for millions of customers. Households without good broadband don’t have the option of watching video online. I was at a meeting with rural folks last week who described their painful attempts to watch even a single SD-quality Netflix stream.

For many years the satellite providers competed on price and were able to keep prices low since they didn’t have to maintain a landline network and the associated technician fleet. However, both satellite providers look to have abandoned that philosophy. DirecTV just announced rate increases that range from $3 to $8 per month for various packages, and they also raised the price for regional sports networks by $1. Dish just announced rate increases that average $6 per month across its packages. These are the two largest rate increases in the history of these companies and will shrink the difference between satellite and terrestrial cable prices.

These rate increases will make it easier for rural cable providers to compete. Many of them have tried to keep rates within a reasonable range of the satellite providers, and these increases will narrow that gap.

In the long run, the consequences of not having the satellite option will create even more change in a fast-changing industry. For years the satellite companies have been the biggest competitors of the big cable companies – and they don’t just serve rural America. I recently did a survey in a community of 20,000 where almost half of the households use satellite TV. As the satellite companies drop subscribers, some of them will revert to traditional cable providers. The recent price increases ought to accelerate that shift.

Nobody has a crystal ball for the cable industry. Just a year ago there seemed to be an industry-wide consensus that we were going to see a rapid acceleration of cord cutting. While cord cutting gets a lot of headlines, it hasn’t yet grown to nearly the same magnitude of change that we saw with households dropping telephone landlines. Surprisingly, even after nearly a decade of landline losses, around 40% of homes still have a landline. Will we see the same thing with traditional cable TV, or will the providers push customers online?

Recently I’ve seen a spate of articles talking about how it’s becoming as expensive to buy online programming as it is to stick with cable companies, and if this becomes the public perception, we might see a slowdown in the pace of cord cutting. It’s possible that traditional cable will be around for a long time. The satellite cable companies lost money for many years, mostly due to low prices. It’s possible that after a few more big rate increases that these companies might become profitable and reconsider their future.

Windstream Turns Focus to Wireless

Windstream CEO Tony Thomas recently told investors that the company plans to stress wireless technology over copper going into the future. The company has been using point-to-point wireless to serve large businesses for several years, and has more recently been using fixed point-to-multipoint wireless technology to satisfy some of its CAF II build-out requirements.

Thomas says that the fixed wireless technology blows away what can be provided with DSL over the old copper plant. In places with flat and open terrain like Iowa and Nebraska, the company is seeing rural residential broadband speeds as fast as 100 Mbps with wireless – far faster than can be obtained with DSL.

Thomas also said that the company is interested in fixed 5G deployments, similar to what Verizon is now starting to deploy – putting 5G transmitters on poles to serve nearby homes. He says the company is interested in the technology in places where it is ‘fiber rich’. While Windstream serves a lot of extremely rural locations, it also serves a significant number of towns and small cities in its incumbent service areas that might be good candidates for 5G.

The emphasis on wireless deployments puts Windstream on the same trajectory as AT&T. AT&T has made it clear numerous times to the FCC that the company would like to tear down rural copper wherever it can and serve customers with wireless. AT&T’s approach differs in that AT&T will be using its licensed cellular spectrum and 4G LTE in rural markets, while Windstream would use unlicensed spectrum like the various WISPs.

This leads me to wonder if Windstream will join the list of big telcos that will largely ignore their existing copper plant moving into the future. Verizon has done its best to sell rural copper to Frontier and seems to be largely ignoring its remaining copper plant – it’s the only big telco that didn’t even bother to chase the CAF II money that could have been used to upgrade rural copper.

The new CenturyLink CEO made it clear that the company has no desire to make additional investments that earn ‘infrastructure returns’, meaning investments in last-mile networks, both copper and fiber. You can’t say that Frontier doesn’t want to continue to support copper, but the company is clearly cash-strapped and is widely reported to be ignoring needed upgrades and repairs to its rural copper networks.

The transition from copper to wireless is always scary for a rural area. It’s great that Windstream can now deliver speeds up to 100 Mbps to some customers. However, the reality of wireless networks is that there are always some customers who are out of reach of the transmitters. These customers may face physical impediments such as being in a valley or behind a hill, out of line-of-sight of the towers. Or customers might simply live too far from a tower, since all of the wireless technologies only work for some fixed distance from a tower, depending upon the specific spectrum being used.

It makes no sense for a rural telco to operate two networks, and one has to wonder what happens to the customers who can’t get the wireless service when the day comes that the copper network gets torn down. This has certainly been one of the concerns at the FCC when considering AT&T’s requests to tear down copper. The current FCC has relaxed the hurdles for tearing down copper, so this situation is bound to arise. In the past the telcos had carrier-of-last-resort obligations for anybody living in the service area. Will they be required to somehow get a wireless signal to the customers that fall between the cracks? I doubt that anybody will force them to do so. It’s not far-fetched to imagine customers living within a regulated telco’s service area who can’t get telephone or broadband service from the telco.

Customers in these areas also have to be concerned about the future. We have wide experience that the current wireless technologies don’t last very long – we’ve seen the electronics wear out and become functionally obsolete within seven years. Will Windstream and the other telcos chasing the wireless path dedicate enough capital to constantly replace the electronics? We’ll have to wait for that answer – but experience says that they will cut corners to save money.

I also have to wonder what happens to the many parts of the Windstream service area that are too hilly or too wooded for the wireless technology. As the company becomes wireless-oriented, will it ignore the parts of the company stuck with copper? I recently visited some rural counties that are heavily wooded, where local Windstream staff have said that the upgrades already made to the copper (which did not seem to make much difference) were the last upgrades those areas might ever see. If Windstream joins the list of big telcos that ignore rural copper, then these networks will die a natural death from neglect. The copper networks of all of the big telcos are already old, and it won’t take much neglect to push these networks into a final death spiral.

Can Cable Fight 5G?

The big cable companies are clearly worried about 5G. They look at the recently introduced Verizon 5G product and they understand that they are going to see something similar over time in all of their metropolitan markets. Verizon is selling 5G broadband – currently at 300 Mbps, but promised to get faster in the future – for $70 standalone or $50 for those with Verizon cellular.

This is the nightmare scenario for them because they have finally grown to the point where they are approaching a near monopoly in most markets. They have successfully competed with DSL and, quarter after quarter, have been taking DSL customers from the telcos. In possibly the last death knell for DSL, both Comcast and Charter recently increased the speeds of their base products to at least 200 Mbps. Those speeds make it hard for anybody to justify buying DSL at 50 Mbps or slower.

The big cable companies have started to raise broadband rates to take advantage of their near-monopoly position. Charter just recently raised bundled broadband prices by $5 per month – the biggest broadband price increase I can remember in a decade or more. Last year a major Wall Street analyst advised Comcast that its basic broadband price ought to be $90.

But now comes fixed 5G. It’s possible that Verizon has found a better bundle than the cable companies because of the number of households that already have cellphones. It has to be tempting for homes to buy fast broadband for only $50 per month in a bundle.

This fixed 5G competition won’t come overnight. Verizon is launching 5G in urban markets where it already has fiber. Nobody knows how fast they will really roll out the product, due mostly to distrust of a string of other Verizon hype about 5G. But over time fixed 5G will hit more markets. Assuming Verizon is successful, others will follow them into the market. I’m already seeing some places where companies like American Tower are building 5G ‘hotels’ at poles – vaults large enough to accommodate several 5G providers at the same location.

We got a clue recently about how the cable companies might fight back against 5G. A number of big cable companies including Comcast, Charter, Cox and Midco announced that they will be implementing the new 10 Gbps technology upgrade from CableLabs. These cable companies only recently introduced gigabit service using DOCSIS 3.1. It looks like the cable companies will fight 5G with speed – they will advertise speeds far faster than 5G and try to win the speed war.

But there is a problem with that strategy. Cable systems with the DOCSIS 3.1 upgrade can clearly offer gigabit speeds, but in reality cable company networks aren’t ready or able to deliver that much speed to everybody. Fiber networks can easily deliver a gigabit to every customer, and with an electronics upgrade can offer 10 Gbps to everybody, as is happening in parts of South Korea. But cable networks have an inherent weakness that makes gigabit speeds problematic.

Cable networks are still shared networks, and all of the customers in a node share the bandwidth. Most cable nodes are still large, with 150 – 300 customers in each neighborhood node, and some with many more. If even a few customers really start to use gigabit speeds, the speed for everybody else in the node will deteriorate. That’s the issue that caused cable networks to bog down in the evenings a decade ago. Cable companies fixed the problem then by ‘splitting’ the nodes, meaning that they built more fiber to reduce the number of homes in each node. If the cable companies want to really start pushing gigabit broadband, and even faster speeds, then they face the same dilemma again and will need another round, or even two rounds, of node splits.
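A rough sketch of the node math makes the problem obvious. The node capacity figure below is an assumption for illustration, not a number for any specific cable system:

```python
# Illustrative shared-node arithmetic. The usable downstream capacity per
# node is an assumption for the sake of the example.
node_capacity_gbps = 5.0   # assumed usable DOCSIS 3.1 downstream per node
homes_in_node = 200        # a typical large legacy node, per the text
sold_speed_gbps = 1.0      # the advertised gigabit product

# If every home demanded its full gigabit at once, each would actually get:
print(f"Worst case per home: {node_capacity_gbps / homes_in_node * 1000:.0f} Mbps")

# The node only holds up while simultaneous heavy users stay under capacity:
print(f"Gigabit streams the node can carry at once: {int(node_capacity_gbps // sold_speed_gbps)}")

# Splitting the node in half doubles the bandwidth available per home:
print(f"After one node split: {node_capacity_gbps / (homes_in_node / 2) * 1000:.0f} Mbps per home")
```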

For now I have serious doubts about whether Comcast and Charter are even serious about their gigabit products. Comcast’s gigabit today costs $140 plus $10 for the modem. The prices are lower in markets where the company is competing against fiber, and customers can also negotiate contract deals that bring the gigabit price closer to $100. Charter has similar pricing – in Oahu, where there is competition, they offer a gigabit for $105, and their prices elsewhere seem to be around $125.

Both of these companies are setting gigabit prices far above Google Fiber’s $70 gigabit. The current cable company gigabit is not a serious competitor to Verizon’s $50 – $70 price for 300 Mbps. I have a hard time thinking the cable companies can compete on speed alone – it’s got to be a combination of speed and price. The cable companies can compete well against 5G if they are willing to price a gigabit at the $70 Verizon 5G price and then use their current $100+ price for 10 Gbps. That pricing strategy will cost them a lot of money in node upgrades, but they would be smart to consider it. The biggest cable companies have already admitted that their ultimate network needs to be fiber – but they’ve been hoping to milk the existing coaxial networks for another decade or two. Any work they do today to reduce node size would be one more step towards an eventual all-fiber network – and could help stave off 5G.

It’s going to be an interesting battle to watch, because if we’ve learned anything in this industry it’s that it’s hard to win customers back after you lose them. The cable companies currently have most of the urban broadband customers and they need to act now to fight 5G – not wait until they have lost 30% of the market.

Facebook Takes a Stab at Wireless Broadband

Facebook has been exploring two technologies in its labs that it hopes will make broadband more accessible for the many communities around the world that have poor or no broadband. The technology I’m discussing today is Terragraph, which uses an outdoor 60 GHz network to deliver broadband. The other is Project ARIES, which is an attempt to beef up the throughput on low-bandwidth cellular networks.

The Terragraph technology was originally intended as a way to bring street-level WiFi to high-density urban downtowns. Facebook looked around the globe and saw many large cities that lack basic broadband infrastructure – it’s nearly impossible to fund fiber in third-world urban centers. The Terragraph technology uses 60 GHz spectrum and the 802.11ay standard – a technology combination originally branded as WiGig.

Using 60 GHz and 802.11ay together is an interesting choice for an outdoor application. On a broadcast basis (hotspot) this frequency only carries between 35 and 100 feet, depending upon humidity and other factors. The original intended use of the technology was as an indoor gigabit wireless network for offices. The 60 GHz spectrum won’t pass through anything, so it was intended to be a wireless gigabit link within a single room. 60 GHz faces problems as an outdoor technology since the frequency is absorbed by both oxygen and water vapor. But numerous countries have released 60 GHz as unlicensed spectrum, making it available without costly spectrum licenses, and the channels are large enough to still deliver real bandwidth even with the physical limitations.

It turns out that a focused beam of 60 GHz spectrum will carry up to about 250 meters when used as backhaul. The urban Terragraph network planned to mount 60 GHz units on downtown poles and buildings. These units would act both as hotspots and as nodes in a backhaul mesh network between units. This is similar to the municipal WiFi networks we saw being tried in a few US cities almost twenty years ago. The biggest downside to the urban idea is the lack of cheap handsets that can use this frequency.
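To get a feel for why a focused 60 GHz beam runs out of steam after a few hundred meters, here is a minimal link-budget sketch. The free-space path loss formula is the standard one; the oxygen absorption figure is the commonly cited value of roughly 15 dB/km at sea level for 60 GHz, and the distances are just illustrative:

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Standard free-space path loss in dB, assuming isotropic antennas."""
    c = 3.0e8  # speed of light in m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

FREQ_HZ = 60e9                  # 60 GHz
OXYGEN_LOSS_DB_PER_KM = 15.0    # commonly cited sea-level figure at 60 GHz

for distance_m in (30, 100, 250):
    spreading = free_space_path_loss_db(distance_m, FREQ_HZ)
    oxygen = OXYGEN_LOSS_DB_PER_KM * distance_m / 1000
    print(f"{distance_m:>4} m: {spreading:.0f} dB spreading loss + {oxygen:.1f} dB oxygen absorption")
```

Rain and foliage add further loss on top of that, which is why these links are kept short and need clear line-of-sight.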

Facebook took a right turn on the urban idea and completed a trial of the technology in a different network design. Last May Facebook worked with Deutsche Telekom to deploy a fixed Terragraph network in Mikebuda, Hungary. This is a small town of about 150 homes covering 0.4 square kilometers – about 100 acres. This is drastically different from a dense urban deployment, with a far lower housing density than US suburbs – it’s similar to many small rural towns in the US, with large lots and empty spaces between homes. The only existing broadband in the town was DSL, serving about 100 customers.

In a fixed mesh network every unit deployed is part of the mesh – each unit can deliver bandwidth into a home as well as bounce the signal on to the next home. In Mikebuda the two companies decided that the ideal network would serve 50 homes (it’s not clear why they couldn’t serve all 100 of the DSL customers). The network delivers about 650 Mbps to each home, although each home is limited to about 350 Mbps by the 802.11ac WiFi routers inside the home. This is a big improvement over the 50 Mbps DSL being replaced.

The wireless mesh network is quick to install, and the network was up and running to homes within two weeks. The mesh configures itself and can instantly reroute and heal around a failed mesh unit. The biggest local drawback is the need for pure line-of-sight, since 60 GHz can’t tolerate any foliage or other impediments, and tree trimming was needed to make this work.

Facebook envisions this fixed deployment as a way to bring bandwidth to the many smaller towns that surround most cities. However, they admit that in the third world the limitation will be backhaul bandwidth, since the third world doesn’t typically have much middle-mile fiber outside of cities – figuring out how to get the bandwidth to the small towns is a bigger challenge than serving the homes within a town. Even in the US, the cost of bandwidth to reach a small town is often the limiting factor on affordably building a broadband solution. In the US this will be a direct competitor to 5G for serving small towns. The Terragraph technology has the advantage of using unlicensed spectrum, but ISPs are going to worry about the squirrelly nature of 60 GHz spectrum.

Assuming that Facebook can find a way to standardize the equipment and get it into mass production, this is another interesting wireless technology to consider. Current point-to-multipoint wireless networks don’t work as well in small towns as they do in rural areas, and this might provide a different way for a WISP to serve a small town. In the third world, however, the limiting factor for many of the candidate markets will be getting backhaul bandwidth to the towns.

The Physics of Millimeter Wave Spectrum

Many of the planned uses for 5G rely upon millimeter wave spectrum, and like every wireless technology, the characteristics of the spectrum define both the benefits and limitations of the technology. Today I’m going to take a shot at explaining the physical characteristics of millimeter wave spectrum without using engineering jargon.

Millimeter wave spectrum falls in the range of 30 GHz to 300 GHz, although currently there has been no discussion in the industry of using anything higher than 100 GHz. The term millimeter wave describes the shortness of the radio waves, which range from about ten millimeters at 30 GHz down to one millimeter at 300 GHz. The 5G industry is also using spectrum with slightly longer wavelengths, such as 24 GHz and 28 GHz – but these frequencies share a lot of the same operating characteristics.
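The wavelengths are easy to work out from the frequency (wavelength = speed of light / frequency), as in this quick sketch:

```python
# Wavelength in millimeters for the frequencies discussed in this post.
C = 3.0e8  # speed of light, meters per second

for label, freq_hz in [("24 GHz", 24e9), ("28 GHz", 28e9), ("30 GHz", 30e9),
                       ("60 GHz", 60e9), ("100 GHz", 100e9), ("300 GHz", 300e9)]:
    wavelength_mm = C / freq_hz * 1000
    print(f"{label:>8}: {wavelength_mm:.1f} mm")
```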

There are a few reasons why millimeter wave spectrum is attractive for transmitting data. Millimeter wave spectrum has the capability of carrying a lot of data, which is what prompts discussion of using it to deliver gigabit wireless service. If you think of radio in terms of waves, then the higher the frequency, the greater the number of waves emitted in a given period of time. For example, if each wave carried one bit of data, then a 30 GHz transmission could carry more bits in one second than a 10 GHz transmission and a lot more bits than a 30 MHz transmission. It doesn’t work exactly like that, but it’s a decent analogy.

This wave analogy also illustrates the biggest limitation of millimeter wave spectrum – the much shorter effective distances for using this spectrum. All radio waves naturally spread out from a transmitter, and here thinking of waves in a swimming pool is also a good analogy. The further across the pool a wave travels, the more dispersed the strength of the wave becomes. When you send a big wave across a swimming pool it’s still pretty big at the other end, but when you send a small wave it’s often impossible to even notice it at the other side of the pool – the small waves at millimeter length die off faster. With a higher frequency the waves are also closer together. Using the pool analogy, waves that are packed tightly together can more easily bump into each other and become hard to distinguish as individual waves by the time they reach the other side of the pool. This is part of the reason why shorter millimeter waves don’t carry as far as lower-frequency spectrum.
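To put a rough number on how much faster the signal weakens at higher frequencies, here is a sketch of the standard free-space path loss calculation (defined between isotropic antennas, ignoring atmospheric absorption); the 300-meter comparison distance is just an arbitrary example:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB between isotropic antennas."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / 3.0e8))

DISTANCE_M = 300  # arbitrary comparison distance
for freq_hz in (3e9, 30e9, 60e9):
    print(f"{freq_hz / 1e9:>3.0f} GHz at {DISTANCE_M} m: {fspl_db(DISTANCE_M, freq_hz):.0f} dB of path loss")
```

Every tenfold jump in frequency adds 20 dB of loss at the same distance – a factor of 100 in received power – which is one reason millimeter wave links are limited to short hops.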

It would be possible to send millimeter waves further by using more power – but the FCC limits the allowed power for all radio frequencies to reduce interference and for safety reasons. High-power radio waves can be dangerous (think of the radio waves in your microwave oven). The FCC low power limitation greatly reduces the carrying distance of this short spectrum.

The delivery distance for millimeter waves can also be impacted by a number of local environmental conditions. In general, shorter radio waves are more susceptible to disruption than longer spectrum waves. All of the following can affect the strength of a millimeter wave signal:

  • Mechanical resonance. Molecules of air in the atmosphere naturally resonate (think of this as vibrating molecules) at millimeter wave frequencies, with the biggest natural interference coming at 24 GHz and 60 GHz.
  • Atmospheric absorption. The atmosphere naturally absorbs (or cancels out) millimeter waves. For example, oxygen absorption is highest at 60 GHz.
  • Millimeter waves are easily scattered. For example, the millimeter wave signal is roughly the same size as a raindrop, so rain will scatter the signal.
  • Brightness temperature. This refers to the phenomenon where millimeter waves absorb high frequency electromagnetic radiation whenever they interact with air or water molecules, and this degrades the signal.
  • Line-of-sight. Millimeter wave spectrum doesn’t pass through obstacles and will be stopped by leaves and almost everything else in the environment. This happens to some degree with all radio waves, but at lower frequencies (with longer wavelengths) the signal can still get delivered by passing through or bouncing off objects in the environment (such as a neighboring house) and still reach the receiver. However, millimeter waves are so short that they are unable to recover from a collision with an object between the transmitter and receiver, and thus the signal is lost upon collision with almost anything.

One interesting aspect of this spectrum is that the antennas used to transmit and receive millimeter waves are tiny – you can squeeze a dozen or more antennas into a square inch. One drawback of using millimeter wave spectrum for cellphones is that it takes a lot of power to operate multiple antennas, so this spectrum won’t be practical for cellphones until we get better batteries.

However, the primary drawback of small antennas is the small target area for receiving a signal. It doesn’t take a lot of spreading and dispersion for the signal to miss the receiver. For spectrum in the 30 GHz range, the full signal strength (and maximum achievable bandwidth) can only be delivered to a receiver within about 300 feet. At greater distances the signal continues to spread and weaken, and the physics shows that the maximum distance for getting any decent bandwidth at 30 GHz is about 1,200 feet. It’s worth noting that a receiver at 1,200 feet is receiving significantly less data than one at a few hundred feet. With higher frequencies the distances are even shorter. For example, at 60 GHz the signal dies off after only 150 feet, and at 100 GHz the signal dies off in 4 – 6 feet.

To sum all of this up, millimeter wave transmission requires a relatively open path without obstacles. Even in ideal conditions a pole-mounted 5G transmitter isn’t going to deliver decent bandwidth past about 1,200 feet, with the effective amount of bandwidth decreasing as the signal travels beyond 300 feet. Higher frequencies mean even shorter distances. Millimeter waves will perform better in places with few obstacles (like trees) or where there is low humidity. Using millimeter wave spectrum presents a ton of challenges for cell phones – the short distances are a big limitation, as is the extra battery drain needed to support the extra antennas. Any carrier that talks about deploying millimeter wave in a way that doesn’t fit the basic physics is exaggerating its plans.

Putting Skin in the Game for Broadband

Recently, Anne Hazlett, the Assistant to the Secretary for Rural Development at the USDA, was quoted in an interview with Telecompetitor saying, “We believe the federal government has a role (in rural broadband), but we also need to see skin in the game from states and local communities because this is an issue that really touches the quality of life in rural America”.

This is a message that I have been telling rural communities for at least five years. Some communities are lucky enough to be served by an independent telco or an electric cooperative that is interested in expanding into fiber broadband. However, for most of rural America there is nobody that will bring the broadband they need to survive as a community.

Five years ago this message was generally not received well because local communities didn’t feel enough pressure from citizens to push hard for a broadband solution. But the world has changed, and now I often hear that lack of broadband is the number one concern in rural counties and towns with poor broadband. We now live in a society where broadband has become a basic necessity for households, similar to water and electricity. Homes without broadband are being left behind.

When I’m approached today by a rural county, one of the first questions I ask is whether they have considered putting money into broadband. More and more rural areas are willing to have that conversation. In Minnesota I can think of a dozen counties that have decided to pledge $1 million to $6 million to get broadband to the unserved parts of their county – these are pledges of outright grants to help pay for the cost of a fiber network.

States are also starting to step up. Just a few years ago there were only a few states with grant programs to help jump-start rural broadband projects. I need to start a list to get a better count, but there are now at least a dozen states that either have or are in the process of creating a state broadband grant program.

I don’t want to belittle any of the state broadband grant programs, because any state funding for broadband helps to bring broadband to places that would otherwise not get it. But all of the state broadband grant programs are far too small. Most of the existing state grant programs allocate between $10 million and $40 million annually towards solving a broadband problem that I’ve seen estimated at $40 – $60 billion nationwide. The grants are nice and massively appreciated by the handful of customers who benefit from each grant – but this doesn’t really fit into the category of putting skin in the game at the state level.

The federal programs are the same way. The current e-Connectivity program at $600 million sounds like a lot of assistance for broadband. But this money is not all grants, and a significant amount of it will be loans that have to be repaid. Even if this were 100% grant money, if the national cost to bring rural fiber is $60 billion, then this year’s program would fund about 1% of the national broadband shortfall – all we would need to do is duplicate the program for a century to solve the broadband deficit. If this program were spread evenly across the country, it would be only $12 million per state.
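The arithmetic behind that claim is simple enough to show as a quick sketch (the $60 billion nationwide cost is the estimate carried over from above, not a hard number):

```python
# Back-of-envelope comparison of the e-Connectivity budget against the
# estimated nationwide cost of rural fiber. The $60 billion figure is an
# estimate quoted in the text, not a hard number.
national_need = 60e9     # estimated cost of rural fiber nationwide, dollars
annual_program = 600e6   # current e-Connectivity funding level, dollars

print(f"Share of the need covered per year: {annual_program / national_need:.0%}")
print(f"Years to cover the need at this pace: {national_need / annual_program:.0f}")
print(f"Split evenly across 50 states: ${annual_program / 50 / 1e6:.0f} million per state")
```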

For many years we’ve been debating whether government ought to help fund rural broadband. In some ways it’s hard to understand why we are having this debate, since in the past the country quickly got behind the idea of the government helping to fund rural electricity, rural telephony and rural roads. It seemed obvious that the whole country benefits when these essential services are brought to everybody. I’ve never seen any criticism that those past programs weren’t successful – because the results of those efforts were instantly obvious.

Nobody anywhere is asking governments to outright pay for broadband networks – although some local governments are desperate enough to consider this when there is no other solution. Building rural fiber – which is what everybody wants – is expensive, and putting skin in the game means helping to offset enough of the cost to enable a commercial provider to make a viable business plan for fiber.

I wrote a blog in December that references a study done by economists at Purdue who estimate that the benefit of rural fiber is around $25,000 per household. I look at the results of the study and think it’s conservative – but even if the results are a little high, this ought to be all of the evidence we need to justify governments at all levels putting more skin in the game.

When I see a rural county with a small population talking about pledging millions of dollars towards getting broadband, I see a community that is really putting skin in the game, because that is a major financial commitment. For many counties this will be the largest amount of money they have ever spent on anything other than roads. By contrast, a state grant program of $20 million per year, when the state budget might be $20 billion, barely acknowledges that broadband is a problem in the state.

I’m sure I’m going to hear back from those who say I’m being harsh on the state and federal grant programs, and that any amount of funding is helpful. I agree, but if we are going to solve the broadband problem it means putting real skin in the game – and to me that means finding enough money to put a meaningful dent in the problem.