Massive MIMO

One of the technologies that will bolster 5G cellular is the use of massive MIMO (multiple-input, multiple-output) antenna arrays. Massive MIMO is an extension of the smaller MIMO antennas that have been in use for several years. For example, home WiFi routers now routinely use multiple antennas to allow for easier connections to multiple devices. Basic forms of the MIMO technology have been deployed in LTE cell sites for several years.

Massive MIMO differs from current technology in its use of large arrays of antennas. For example, Sprint, along with Nokia, demonstrated a massive MIMO transmitter in 2017 that used 128 antennas, with 64 for receive and 64 for transmit. Sprint is in the process of deploying a much smaller array in cell sites using the 2.5 GHz spectrum.

Massive MIMO can be used in two different ways. First, multiple transmitter antennas can be focused together to reach a single customer (who also needs to have multiple receivers) to increase throughput. In the Sprint trial mentioned above, Sprint and Nokia were able to achieve a 300 Mbps connection to a beefed-up cellphone. That’s a lot more bandwidth than can be achieved from one transmitter, which at most could deliver whatever bandwidth is possible on the single channel of spectrum being used.

The extra bandwidth is achieved in two ways. First, using multiple transmitters means that multiple channels of the same frequency can be sent simultaneously to the same receiving device. Both the transmitter and receiver must have sophisticated and powerful computing capability to coordinate and combine the multiple signals.

The bandwidth is also boosted by what’s called precoding or beamforming. This technology coordinates the signals from multiple transmitters to maximize the received signal gain and to reduce what is called the multipath fading effect. In simple terms, the beamforming technology sets the power level and gain for each separate antenna to maximize the data throughput. Every frequency and channel operates a little differently, and beamforming favors the channels and frequencies with the best operating characteristics in a given environment. Beamforming also allows for the cellular signal to be concentrated in a portion of the receiving area – to create a ‘beam’. This is not the same kind of highly concentrated beam that is used in microwave transmitters, but the concentration of the radio signals into the general area of the customer means a more efficient delivery of data packets.
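
To make the precoding idea a little more concrete, here is a minimal NumPy sketch of one textbook approach, maximum ratio transmission, where each antenna’s signal is weighted by the conjugate of the channel it sees so all the paths add in phase at the receiver. The channel values are randomly generated and the whole thing is an illustrative simplification, not the proprietary algorithms the equipment vendors actually deploy.

```python
import numpy as np

rng = np.random.default_rng(0)
num_antennas = 64  # transmit antennas in the array

# Simulated narrowband channel from each antenna to a single user
# (complex gain: magnitude captures fading, angle captures phase shift)
h = rng.normal(size=num_antennas) + 1j * rng.normal(size=num_antennas)

# Maximum ratio transmission: weight each antenna by the conjugate of its
# channel, normalized to unit total transmit power
w = np.conj(h) / np.linalg.norm(h)

# Received power with beamforming vs. a single antenna at the same power
beamformed_gain = np.abs(h @ w) ** 2
single_antenna_gain = np.mean(np.abs(h) ** 2)
print(f"Array gain over one antenna: {beamformed_gain / single_antenna_gain:.1f}x")
```

With 64 antennas the coherent combining yields roughly a 64x power gain toward that one user, which is the intuition behind why concentrating the signal into a ‘beam’ delivers data packets more efficiently.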

The cellular companies, though, are focused on the second use of MIMO – the ability to connect to more devices simultaneously. One of the key parameters of the 5G cellular specifications is the ability of a cell site to make up to 100,000 simultaneous connections. The carriers envision 5G as the platform for the Internet of Things and want to use cellular bandwidth to connect to the many sensors envisioned in our near-future world. This first generation of massive MIMO won’t bump cell sites to 100,000 connections, but it’s a first step at increasing the number of connections.

Massive MIMO is also going to facilitate the coordination of signals from multiple cell sites. Today’s cellular networks are based upon a roaming architecture. That means that a cellphone or any other device that wants a cellular connection will grab the strongest available cellular signal. That’s normally the closest cell site but could be a more distant one if the nearest site is busy. With roaming a cellular connection is handed from one cell site to the next for a customer that is moving through cellular coverage areas.

One of the key aspects of 5G is that it will allow multiple cell sites to connect to a single customer when necessary. That might mean combining the signal from a MIMO antenna in two neighboring cell sites. In most places today this is not particularly useful since cell sites today tend to be fairly far apart. But as we migrate to smaller cells the chances of a customer being in range of multiple cell sites increases. The combining of cell sites could be useful when a customer wants a big burst of data, and coordinating the MIMO signals between neighboring cell sites can temporarily give a customer the extra needed bandwidth. That kind of coordination will require sophisticated operating systems at cell sites and is certainly an area that the cellular manufacturers are now working on in their labs.

More Crowding in the OTT Market

It seems like I’ve been seeing news almost weekly about new online video providers. This will put even more pressure on cable companies as more people find an online programming option to suit them. It also likely means a shakeout of the OTT industry, with such a crowded field of competitors all vying for the same pool of cord-cutters.

NewTV. This is an interesting new OTT venture founded by Jeffrey Katzenberg, former chairman of Walt Disney Studios, and headed by Meg Whitman, former CEO of Hewlett Packard Enterprise and also formerly of Disney. The company has raised $1 billion and has support from every major Hollywood studio including 21st Century Fox, Disney, NBCUniversal, Sony Pictures Entertainment, and Viacom.

Rather than take on Netflix and other OTT providers directly, the company plans to develop short 10-minute shows aimed exclusively at cellphone users. They plan both free content supported by advertising and a subscription plan that would use the ‘advertising-light’ option used by Hulu.

AT&T. The company already owns a successful OTT product in HBO Now, which has over 5 million customers. John Stankey, the head of WarnerMedia, says the plan is to create additional bundles of content centered around HBO that bring in other WarnerMedia content and selected external content. He admits that HBO alone does not represent enough content to be a full-scale OTT alternative for customers.

AT&T’s goal is to take advantage of HBO’s current reputation and to position their content in the market as premium and high quality as a way to differentiate themselves from other OTT providers.

Apple. The company has been talking about getting into the content business for a decade, and it has finally pulled the trigger. Apple invested $1 billion this year and now has 24 original series in production as the beginning of a new content platform. Among the new shows is a series about a morning TV show starring Reese Witherspoon and Jennifer Aniston.

The company hired Jamie Erlicht and Zack Van Amburg from Sony Pictures Television to operate the new business and has since hired other experienced television executives. They also are working on other new content and just signed a multiyear deal with Oprah Winfrey. The company has not announced any specific plans for airing and using the new content, but that will be coming soon since the first new series will probably be ready by March of 2019.

T-Mobile. As part of the proposed merger with Sprint, T-Mobile says it plans to launch a new ‘wireless first’ TV platform that will deliver 4K video using its cellular network. In January T-Mobile purchased Layer3, which has been offering a 275-channel HD lineup in a few major markets.

The T-Mobile offering will be different from other OTT products in that the company is shooting for what it calls the quad play, which bundles video, in-home broadband (delivered using cellular frequencies), mobile broadband and voice. The company says the content will only be made available to T-Mobile customers, and it views the offering as a way to reduce churn and gain cellular market share.

The Layer3 subsidiary will also continue to pursue partnerships to gain access to customers through fiber networks, such as the arrangement it currently has with the municipal fiber network in Longmont, Colorado.

Disney. Earlier this year the company announced the creation of a direct-to-consumer video service based upon the company’s huge library of popular content. Disney gained the needed technology by purchasing BAMTech, the company that supports Major League Baseball online. Disney also is bolstering its content portfolio through the purchase of Twenty-First Century Fox.

Disney plans to launch an ESPN-based sports bundle in early 2019. They have not announced specific plans on how and when to launch the rest of their content, but they canceled an agreement with Netflix for carrying Disney content.

FCC Speed Tests for ISPs

ISPs awarded CAF II funding in the recent auction need to be aware that they will be subject to compliance testing for both latency and speeds on their new broadband networks. There are financial penalties for those that fail to meet these tests. The FCC revised the testing standards in July in Docket DA 18-710, and the new standards become effective with testing that starts in the third quarter of 2019. The new standards will replace those already in place for ISPs that received funding from earlier rounds of the CAF program, as well as for ISPs getting A-CAM or other rate-of-return USF funding.

ISPs can choose between three methods for testing. First, they may elect what the FCC calls the MBA program, which uses an external vendor, approved by the FCC, to perform the testing. This firm has been testing speeds on the networks built by the large telcos for many years. ISPs can also use existing network tools built into the customer CPE that allow test pinging and other testing methodologies. Finally, an ISP can install ‘white boxes’ that provide the ability to perform the tests.

The households to be tested are chosen at random by the ISP every two years. The FCC doesn’t describe a specific method for ensuring that the selections are truly random, but the ISP must describe to the FCC how this is done. It wouldn’t be hard for an ISP to fudge the results of the testing if they make sure that customers from slow parts of their network are not in the testing sample.

The number of tests to be conducted varies by the number of customers for which a recipient is getting CAF support: if the number of CAF households is 50 or fewer, they must test 5 customers; if there are 51-500 CAF households, they must test 10% of households; and for more than 500 CAF households they must test 50. ISPs that declare a high latency must test more locations, with a maximum of 370.
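
As a rough illustration of those tiers, a sample-size lookup might be sketched as follows; this is just my reading of the rules, not a compliance tool, and the separate schedule for high-latency ISPs isn’t modeled.

```python
def required_test_locations(caf_households: int) -> int:
    """Approximate CAF II test-location count from the household tiers above.

    ISPs that declare high latency must test more locations (up to 370);
    that separate schedule isn't modeled here.
    """
    if caf_households <= 50:
        return min(5, caf_households)        # test 5 customers (or every one, if fewer)
    if caf_households <= 500:
        return round(caf_households * 0.10)  # test 10% of CAF households
    return 50                                # flat 50 locations above 500 households

print(required_test_locations(40))    # -> 5
print(required_test_locations(300))   # -> 30
print(required_test_locations(2000))  # -> 50
```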

ISPs must conduct the tests for a solid week, including weekends, in every quarter to eliminate seasonality. Tests must be conducted in the evenings between 6:00 PM and midnight. Latency tests must be done every minute during the six-hour testing window. Speed tests – run separately for upload speeds and download speeds – must be done once per hour during the six-hour testing window.

The FCC has set expected standards for the speed tests. These standards are based upon the required speeds of a specific program – such as the first CAF II program that required speeds of at least 10/1 Mbps. In the latest CAF program the testing will be based upon the speeds that the ISP declared it could meet when entering the auction – speeds that can be as fast as 1 Gbps.

ISPs are expected to meet latency standards 95% of the time. Speed tests must achieve 80% of the expected upload and download speed 80% of the time. This might surprise people living in the original CAF II areas, because the big telcos only need to achieve download speeds of 8 Mbps for 80% of customers to meet the CAF standard. The 10/1 Mbps standard was low enough, but this lets the ISPs off the hook for underperforming even against that incredibly slow speed. This requirement means that an ISP guaranteeing gigabit download speeds needs to achieve 800 Mbps 80% of the time. ISPs that meet the speeds and latencies for 100% of customers are excused from quarterly testing and only have to test once per year.
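
Here is a hedged sketch of how those two thresholds might be checked against a quarter’s worth of raw measurements; the function names, data shapes and the 100 ms latency limit are my assumptions for illustration, not anything lifted from the FCC order.

```python
def meets_speed_standard(measured_mbps, committed_mbps):
    """True if at least 80% of tests reach 80% of the committed speed."""
    threshold = 0.8 * committed_mbps
    passing = sum(1 for m in measured_mbps if m >= threshold)
    return passing / len(measured_mbps) >= 0.8

def meets_latency_standard(latencies_ms, limit_ms=100.0):
    """True if at least 95% of latency tests fall under the assumed limit."""
    passing = sum(1 for lat in latencies_ms if lat <= limit_ms)
    return passing / len(latencies_ms) >= 0.95

# Example: an ISP that committed to gigabit service must hit 800 Mbps
# in at least 80% of its speed tests
downloads = [950, 870, 820, 790, 910, 840, 805, 760, 890, 830]  # Mbps
print(meets_speed_standard(downloads, committed_mbps=1000))     # -> True (8 of 10 pass)
```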

There are financial penalties for ISPs that don’t meet these tests.

  • ISPs that have between 85% and 100% of households that meet the test standards lose 5% of their FCC support.
  • ISPs that have between 70% and 85% of households that meet the test standards lose 10% of their FCC support.
  • ISPs that have between 55% and 70% of households that meet the test standards lose 15% of their FCC support.
  • ISPs with less than 55% of compliant households lose 25% of their support.
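
As a quick illustration of those tiers, the support reduction could be computed with a simple lookup like the one below; the treatment of a fully compliant ISP is my assumption based on the exemption described above.

```python
def support_reduction(pct_compliant_households: float) -> float:
    """Fraction of FCC support withheld for a given household compliance rate."""
    if pct_compliant_households >= 100:
        return 0.00   # fully compliant ISPs lose nothing (and test less often)
    if pct_compliant_households >= 85:
        return 0.05   # 85% - 100% compliant: lose 5% of support
    if pct_compliant_households >= 70:
        return 0.10   # 70% - 85% compliant: lose 10%
    if pct_compliant_households >= 55:
        return 0.15   # 55% - 70% compliant: lose 15%
    return 0.25       # below 55% compliant: lose 25%

print(support_reduction(90))  # -> 0.05
print(support_reduction(50))  # -> 0.25
```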

For CAF II auction winners these reductions in funding would only be applied to the time periods remaining after they fail the tests. This particular auction covers a 10-year period, and the testing would start once the new networks are operational – buildout is required to be completed between years 3 and 6 after funding.

This will have the biggest impact on ISPs that overstated their network capability. For instance, there were numerous ISPs that claimed the ability in the CAF auction to deliver 100 Mbps and they are going to lose 25% of the funding if they deliver speeds slower than 80 Mbps.

The Continued Growth of Data Traffic

Every one of my clients continues to see explosive growth of data traffic on their broadband networks. For several years I’ve been citing a statistic used for many years by Cisco that says household use of data has doubled every three years since 1980. In Cisco’s latest Visual Networking Index, published in 2017, the company predicted a slight slowdown in data growth, with household data now doubling about every 3.5 years.

I searched the web for other predictions of data growth and found a report published by Seagate, also in 2017, titled Data Age 2025: The Evolution of Data to Life-Critical. This report was authored for Seagate by the consulting firm IDC.

The IDC report predicts that annual worldwide web data will grow from the 16 zettabytes of data used in 2016 to 163 zettabytes in 2025 – a tenfold increase in nine years. A zettabyte is a mind-numbingly large number that equals a trillion gigabytes. That increase means an annual compounded growth rate of 29.5%, which more than doubles web traffic every three years.
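
That growth rate is easy to sanity-check; a few lines of arithmetic reproduce roughly the 29-30% compounded rate and a doubling time just under three years.

```python
import math

start_zb, end_zb, years = 16, 163, 9                 # IDC figures, 2016 to 2025

cagr = (end_zb / start_zb) ** (1 / years) - 1        # compound annual growth rate
doubling_years = math.log(2) / math.log(1 + cagr)

print(f"Compound annual growth: {cagr:.1%}")         # ~29.4% per year
print(f"Doubling time: {doubling_years:.1f} years")  # ~2.7 years
```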

The most recent burst of overall data growth has come from the migration of video online. IDC expects online video to keep growing rapidly, but also foresees a number of other web uses that are going to increase data traffic by 2025. These include:

  • The continued evolution of data from business background to “life-critical”. IDC predicts that as much as 20% of all future data will become life-critical, meaning it will directly impact our daily lives, with nearly half of that data being hypercritical. As an example, they mention how a computer crash today might cause us to lose a spreadsheet, but data used to communicate with a self-driving car must be delivered accurately. They believe that the software needed to ensure such accuracy will vastly increase the volume of traffic on the web.
  • The proliferation of embedded systems and the IoT. Today most IoT devices generate tiny amounts of data. The big growth in IoT data will not come directly from the IoT devices and sensors in the world, but from the background systems that interpret this data and make it instantly usable.
  • The increasing use of mobile and real-time data. Again, using the self-driving car as an example, IDC predicts that more than 25% of data will be required in real-time, and the systems necessary to deliver real-time data will explode usage on networks.
  • Data usage from cognitive computing and artificial intelligence systems. IDC predicts that data generated by cognitive systems – machine learning, natural language processing and artificial intelligence – will generate more than 5 zettabytes by 2025.
  • Security systems. As we have more critical data being transmitted, the security systems needed to protect the data will generate big volumes of additional web traffic.

Interestingly, this predicted growth all comes from machine-to-machine communications that are a result of us moving more daily functions onto the web. Computers will be working in the background exchanging and interpreting data to support activities such as traveling in a self-driving car or chatting with somebody in another country using a real-time interpreter. We are already seeing the beginning stages of numerous technologies that will require big real time data.

Data growth of this magnitude is going to require our data networks to grow in capacity. I don’t know of any client network that is ready to handle a ten-fold increase in data traffic, and carriers will have to beef up backbone networks significantly over time. I have often seen clients invest in new backbone electronics that they hoped would be good for a decade, only to find the upgraded networks swamped within a few years. It’s hard for network engineers and CEOs to fully grasp the impact of continued rapid data growth on our networks, and it’s more common than not to underestimate future traffic growth.

This kind of data growth will also increase the pressure for faster end-user data speeds and more robust last-mile networks. If a rural 10 Mbps DSL line feels slow today, imagine how slow it will feel when urban connections are far faster than today. If the trends IDC foresees hold true, by 2025 there will be many homes needing and using gigabit connections. It’s common, even in the industry, to scoff at the usefulness of residential gigabit connections, but when our data needs keep doubling it’s inevitable that we will need gigabit speeds and beyond.

Going Wireless-only for Broadband

According to New Street Research (NSR), up to 14% of homes in the US could go all-wireless for broadband. They estimate that there are 17 million homes whose bandwidth usage is small enough to justify satisfying their broadband needs strictly with a cellular connection. NSR says that only about 6.6 million homes have elected to go all-wireless today, meaning there is a sizable gap of around 10 million more homes for which wireless might be a reasonable alternative.

The number of households that are going wireless-only has been growing. Surveys by Nielsen and others have shown that the trend to go wireless-only is driven mostly by economics, helped by the ability of many people to satisfy their broadband demands using WiFi at work, school or other public places.

NSR also predicts that the number of homes that can benefit by going wireless-only will continue to shrink. They estimate that only 14 million homes will benefit by going all-wireless within five years – with the decrease due to the growing demand of households for more broadband.

There are factors that make going wireless an attractive alternative for those that don’t use much broadband. Cellular data speeds have been getting faster as cellular carriers continue to implement full 4G technology. The first fully compliant 4G cell site was activated in 2017 and full 4G is now being deployed in many urban locations. As speeds get faster it becomes easier to justify using a cellphone for broadband.

Of course, cellular data speeds need to be put into context. A good 4G connection might be in the range of 15 Mbps. That speed feels glacial when compared to the latest speeds offered by cable companies. Both Comcast and Charter are in the process of increasing data speeds for their basic product to between 100 Mbps and 200 Mbps depending upon the market. Cellphones also tend to have sluggish operating systems that are tailored for video and that can make regular web viewing feel slow and clunky.

Cellular data speeds will continue to improve as we see the slow introduction of 5G into the cellular network. The 5G specification calls for cellular data speeds of 100 Mbps download when 5G is fully implemented. That transition is likely to take another decade, and even when implemented isn’t going to mean fast cellular speeds everywhere. The only way to achieve 100 Mbps speeds is by combining multiple spectrum paths to a given cellphone user, probably from multiple cell sites. Most of the country, including most urban and suburban neighborhoods are not going to be saturated with multiple small cell sites – the cellular companies are going to deploy faster cellular speeds in areas that justify the expenditure. The major cellular providers have all said that they will be relying on 4G LTE cellular for a long time to come.

One of the factors that is making it easier to go wireless-only is that people have access throughout the day to WiFi, which is powered from landline broadband. Most teenagers would claim that they use their cellphones for data, but most of them have access to WiFi at home and school and at other places they frequent.

The number one factor that drives people to go all-wireless for data is price. Home broadband is expensive by the time you add up all of the fees from a cable company. Since most people in the country already have a cellphone, dropping the home broadband connection is a good way for the budget-conscious to control their expenses.

The wireless carriers are also making it easier to go all wireless by including some level of video programming with some cellular plans. These are known as zero-rating plans that let a customer watch some video for free outside of their data usage plan. T-Mobile has had these plans for a few years and they are now becoming widely available on many cellular plans throughout the industry.

The monthly data caps on most wireless plans are getting larger. For the careful shopper who lives in an urban area there are usually a handful of truly unlimited data plans. Users have learned, though, that many such plans heavily restrict tethering to laptops and other devices. But data caps have crept higher across the board in the industry compared to a few years ago. Users who are willing to pay more for data can now buy the supposedly unlimited data plans from the major carriers that are actually capped at between 20 and 25 GB per month.

There are always other factors to consider like cellular coverage. I happen to live in a hilly wooded town where coverage for all of the carriers varies block by block. There are so many dead spots in my town that it’s challenging to use cellular even for voice calls. I happen to ride Uber a lot and it’s frustrating to see Uber drivers get close to my neighborhood and get lost when they lose their Verizon signal. This city would be a hard place to rely only on a cellphone. Rural America has the same problem and regardless of the coverage maps published by the cellular companies there are still huge areas where rural cellular coverage is spotty or non-existent.

Another factor that makes it harder to go all-wireless is working from home. Cellphones are not always adequate when trying to log onto corporate WANs or for downloading and working on documents, spreadsheets and PowerPoints. While tethering to a computer can solve this problem, it doesn’t take a lot of working from home to surpass the data caps on most cellular plans.

I’ve seen a number of articles in the last few years claiming that the future is wireless and that we eventually won’t need landline broadband. This claim ignores the fact that the amount of data demanded by the average household is doubling every three years. The average home uses ten times or more data on their landline connection today than on their cellphones. It’s hard to foresee the cellphone networks closing that gap when the amount of landline data use keeps growing so rapidly.

Winners of the CAF II Auction

The FCC CAF II reverse auction recently closed with an award of $1.488 billion to build broadband in rural America. This funding was awarded to 103 recipients that will collect the money over ten years. The funded projects must be 40% complete by the end of three years and 100% complete by the end of six years. The original money slated for the auction was almost $2 billion, but the reverse auction reduced the amount of awards and some census blocks got no bidders.

The FCC claims that 713,176 rural homes will be getting better broadband, but the real number of homes that will see a benefit from the auction is closer to 523,000, since the auction funded Viasat to provide already-existing satellite broadband to 190,000 homes in the auction.

The FCC claims that 19% of the homes covered by the grants will be offered gigabit speeds, 53% will be offered speeds of at least 100 Mbps and 99.75% will be offered speeds of at least 25 Mbps. These statistics have me scratching my head. The 19% of the homes that will be offered gigabit speeds are obviously going to be getting fiber. I know a number of the winners who will be using the funds to help pay for fiber expansion. I can’t figure out what technology accounts for the rest of the 53% of homes that supposedly will be able to get 100 Mbps speeds.

As I look through the filings I note that many of the fixed wireless providers claim that they can serve speeds over 100 Mbps. It’s true that fixed wireless can be used to deliver 100 Mbps speeds. To achieve that speed, customers either need to be close to the tower or else the wireless carrier has to dedicate extra resources to that customer – meaning less of the tower’s capacity can be used to serve other customers. I’m not aware of any WISPs that offer ubiquitous 100 Mbps speeds, because to do so means serving a relatively small number of customers from a given tower. To be fair to the WISPs, their CAF II filings also say they will be offering slower speeds like 25 Mbps and 50 Mbps. The FCC exaggerated the results of the auction by claiming that any recipient capable of delivering 100 Mbps to a few customers will be delivering it to all customers – something that isn’t true. The fact is that not many of the households beyond the 19% getting fiber will ever buy 100 Mbps broadband. I know the FCC wants to get credit for improving rural broadband, but there is no reason to hype the results to be better than they are.

I also scratch my head wondering why Viasat was awarded $122 million in the auction. The company is the winner of funding for 190,595 households, or 26.7% of the households covered by the entire auction. Satellite broadband is every rural customer’s last choice for broadband. The latency is so poor on satellite broadband that it can’t be used for any real time applications like watching live video, making a Skype call, connecting to school networks to do homework or for connecting to a corporate WAN to work from home. Why does satellite broadband even qualify for the CAF II funding? Viasat had to fight to get into the auction and their entry was opposed by groups like the American Cable Association. The Viasat satellites are already available to all of the households in the awarded footprint, so this seems like a huge government giveaway that won’t bring any new broadband option to the 190,000 homes.

Overall the outcome of the auction was positive. Over 135,000 rural households will be getting fiber. Another 387,000 homes will be getting broadband of at least 25 Mbps, mostly using fixed wireless, with the remaining 190,000 homes getting the same satellite option they already have today.
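
A quick back-of-the-envelope check ties those household counts to the FCC’s percentages; the figures are rounded, so the totals are approximate.

```python
total_households = 713_176
satellite_households = 190_595                             # the Viasat award

fiber_households = round(total_households * 0.19)          # 19% offered gigabit speeds
other_terrestrial = total_households - fiber_households - satellite_households

print(f"Fiber (gigabit): ~{fiber_households:,}")                           # ~135,500
print(f"Other 25+ Mbps (mostly fixed wireless): ~{other_terrestrial:,}")   # ~387,000
print(f"Satellite (already available today): {satellite_households:,}")
```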

It’s easy to compare this to the original CAF II program that gave billions to the big telcos and only required speeds of 10/1 Mbps. That original CAF II program was originally intended to be a reverse auction open to anybody, but at the last minute the FCC gave all of the money to the big telcos. One has to imagine there was a huge amount of lobbying done to achieve that giant giveaway.

Most of the areas covered by the first CAF II program had higher household density than this auction pool, and a reverse auction would have attracted a lot of ISPs willing to invest in faster technologies than the telcos. The results of this auction show that most of those millions of homes would have gotten broadband of at least 25 Mbps instead of the beefed-up DSL or cellular broadband they are getting through the big telcos.

Upgrading Broadband Speeds

A few weeks ago Charter increased my home broadband speed from 60 Mbps to 130 Mbps with no change in price. My upload speed seems to be unchanged at 10 Mbps. Comcast is in the process of similar speed upgrades and is increasing base download speeds to between 100 Mbps and 200 Mbps in various markets.

I find it interesting that while the FCC is having discussions about keeping the definition of broadband at 25 Mbps, the big cable companies – these two alone have over 55 million broadband customers – are unilaterally increasing broadband speeds.

These companies aren’t doing this out of the goodness of their hearts, but for business reasons. First, I imagine that this is a push to sharpen the contrast with DSL. There are a number of urban markets where customers can buy 50 Mbps DSL from AT&T and others and this upgrade opens up a clear speed difference between cable broadband and DSL.

However, I think the main reason they are increasing speeds is to keep customers happy. This change was done quietly, so I suspect that most people had no idea that the change was coming. I also suspect that most people don’t regularly do speed tests and won’t know about the speed increase – but many of them will notice better performance.

One of the biggest home broadband issues is inadequate WiFi, with out-of-date routers or poor router placement degrading broadband performance. Pushing faster speeds into the house can overcome some of these WiFi issues.

This should be a wake-up call to everybody else in the industry to raise their speeds. There are ISPs and overbuilders all across the country competing against the giant cable companies, and they need to immediately upgrade speeds or lose the public relations battle in the marketplace. Even those who are not competing against these companies need to take heed, because any web search is going to show consumers that 100 Mbps broadband or greater is now the new standard.

These unilateral changes make a mockery of the FCC. It’s ridiculous to be having discussions about setting the definition of broadband at 25 Mbps when the two biggest ISPs in the country have base product speeds 5 to 8 times faster than that. States with broadband grant programs are also having the speed conversation, and this will hopefully alert them that the new goal for broadband needs to be at least 100 Mbps.

These speed increases were inevitable. We’ve known for decades that the home demand for broadband has been doubling every three years. When the FCC first started talking about 25 Mbps as the definition of acceptable broadband, the math said that within six years we’d be having the same discussion about 100 Mbps broadband – and here we are having that discussion.
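
The arithmetic behind that statement is simple compounding: demand that doubles every three years turns a 25 Mbps benchmark into 100 Mbps after two doubling periods.

```python
def projected_demand(start_mbps: float, years: float, doubling_period: float = 3.0) -> float:
    """Project broadband demand assuming it doubles every doubling_period years."""
    return start_mbps * 2 ** (years / doubling_period)

print(projected_demand(25, years=6))   # -> 100.0 Mbps, the discussion we're having now
print(projected_demand(25, years=12))  # -> 400.0 Mbps
```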

The FCC doesn’t want to recognize the speed realities in the world because they are required by law to try to bring rural speeds to par with urban speeds. But this can’t be ignored, because these speed increases are not just for bragging rights. We know that consumers find ways to fill faster data pipes. Just two years ago I saw articles wondering if there was going to be any market for 4K video. Today, that’s the first thing offered to me on both Amazon Prime and Netflix. They shoot all new programming in 4K and offer it at the top of their menus. It’s been reported that at the next CES electronics show there will be several companies pushing commercially available 8K televisions. This technology is going to require a broadband connection between 60 Mbps and 100 Mbps depending upon the level of screen action. People are going to buy these sets and then demand programming to use them – and somebody will create the programming.

8K video is not the end game. Numerous companies are working on virtual presence where we will finally be able to converse with a hologram of somebody as if they were in the same room. Early versions of this technology, which ought to be available soon will probably use the same range of bandwidth as 8K video, but I’ve been reading about near-future technologies that will produce realistic holograms and that might require as much as a 700 Mbps connection – perhaps the first real need for gigabit broadband.

While improving urban data speeds is great, every increase in urban broadband speeds highlights the poor condition of rural broadband. While urban homes are getting 130 – 200 Mbps for decent prices there are still millions of homes with either no broadband or with broadband at speeds of 10 Mbps or less. The gap between urban and rural broadband is growing wider every year.

If you’ve been reading this blog you know I don’t say a lot of good things about the big cable companies. But kudos to Comcast and Charter for unilaterally increasing broadband speeds. Their actions speak louder than anything that we can expect out of the FCC.

Subsidizing Rural Broadband

In a rare joint undertaking involving the big and small telcos, the trade groups USTelecom and NTCA—The Rural Broadband Association sponsored a whitepaper titled, Rural Broadband Economics: A Review of Rural Subsidies.

The paper describes why it’s expensive to build broadband networks in rural areas, with high costs mostly driven by low customer density. This is something that is almost universally understood in the industry, but the paper lays out the situation for politicians and others who might not be familiar with our business.

The paper goes on to describe how other kinds of public infrastructure – such as roads, electric grids, water and natural gas systems – deal with the higher costs in rural areas. Both natural gas and water systems share the same characteristics as cable TV networks in this country and are rarely constructed in rural areas. Rural customers must use alternatives like wells for water or propane instead of natural gas.

The electric grid is the most analogous to the historic telephone network in the country. The government decided that everybody should be connected to the electric grid, and various kinds of government subsidies have been used to help pay for rural electric systems. Where the bigger commercial companies wouldn’t build, a number of rural electric cooperatives and municipal electric companies filled the gap. The federal government developed subsidy programs, such as low-cost loans, to help construct and maintain the rural electric grids. There was no attempt to create universal electric rates across the country, and areas lucky enough to have hydroelectric power have electric rates that are significantly lower than regions with more expensive methods of power generation.

Roads are the ultimate example of government subsidies for infrastructure. There are both federal and state fuel taxes used to fund roads. Since most drivers live in urban areas, their fuel taxes heavily subsidize rural roads.

The paper explains that there are only a few alternatives to fund rural infrastructure:

  • Charge higher rates to account for the higher costs of operating in rural areas. This is why small town water rates are often higher than rates in larger towns in the same region.
  • Don’t build the infrastructure since it’s too expensive. This is seen everywhere when cable TV networks, natural gas distribution and water and sewer systems are rarely built outside of towns.
  • Finally, rural infrastructure can be built using subsidies of some kind.

Subsidies can come from several different sources:

  • Cross-subsidies within the same firm. For example, telephone regulators long ago accepted the idea that business rates should be set higher to subsidize residential rates.
  • Cross subsidies between firms. An example would be access rates charged to long distance carriers that were used for many years to subsidize local telephone companies. There are also a number of electric companies that have subsidized the creation of broadband networks using profits from the electric business.
  • Philanthropic donations. This happens to a small extent. For example, I recently heard that Microsoft had contributed money to help build fiber to a small town.
  • Government subsidies. There have been a wide range of these in the telecom industry, with the latest big ones being the CAF II grants that contribute towards building rural broadband.

Interestingly the paper doesn’t draw many strong conclusions other than to say that rural broadband will require government subsidies of some kind. It concludes that other kinds of subsidies are not reasonably available.

I suspect there are no policy recommendations in the paper because the small and large companies probably have a different vision of rural broadband subsidies. This paper is more generic and serves to define how subsidies function and to compare broadband subsidies to other kinds of infrastructure.

Optical Loss on Fiber

One issue that isn’t much understood except by engineers and fiber technicians is optical loss on fiber. While fiber is an incredibly efficient medium for transmitting signals, there are still factors that cause the signal to degrade. In new fiber routes these factors are usually minor, but over time problems with fiber accumulate. We’re now seeing some of the long-haul fibers from the 1980s go bad due to accumulated optical signal losses.

Optical signal loss is described as attenuation. Attenuation is a reduction in the power and clarity of a light signal that diminishes the ability of a receiving laser to demodulate the data being received. Any factor that degrades the optical signal is said to increase the attenuation.

Engineers describe several kinds of phenomena that can degrade a fiber signal:

  • Chromatic Dispersion. This is the phenomenon where a signal gets distorted over distance as the different frequencies of light travel at different speeds. Lasers don’t generally create only one light frequency, but rather a range of slightly different colors, and different colors of light travel through the fiber at slightly different speeds. This is one of the primary factors that limits the distance a fiber signal can be sent without passing through a repeater to restart and synchronize all of the separate light paths. More expensive lasers can generate purer light signals and can transmit further. These better lasers are used on long-haul fiber routes that might go 60 miles between repeaters, while FTTH networks aren’t recommended to run more than 10 miles.
  • Modal Dispersion. Some fibers are designed to have slightly different paths for the light signal and are called multimode fibers. A fiber system can transmit different data paths through the separate modes. A good analogy for the modes is to think of them as separate tubes inside of a conduit. But these are not physically separated paths; the modes are created by making different parts of the fiber strand out of slightly different glass material. Modal dispersion comes from the light traveling at slightly different speeds through the different modes.
  • Insertion Loss. This is the loss of signal that happens when the light signal moves from one medium to another. Insertion loss occurs at splice points, where fiber passes through a connector, or when the signal is regenerated through a repeater or other device sitting in the fiber path.
  • Return Loss. This is the loss of signal due to interference caused when some of the light is reflected backwards in the fiber. While the glass used in fiber is clear, it’s never perfect, and some photons are reflected backwards and interfere with oncoming light signals.

Fiber signal loss can be measured with test equipment that compares the received signal to an ideal signal. The losses are expressed in decibels (dB). New fiber networks are designed with a low total dB loss so that there is headroom over time to accommodate natural damage and degradation. Engineers are able to calculate the amount of loss that can be expected for a signal traveling through a fiber network – called a loss budget. For example, they know that a fiber signal will degrade by some specific amount, say 1 dB, just from passing through a certain type and length of fiber. They might expect a loss of 0.3 dB for each splice along a fiber and 0.75 dB when a fiber passes through a connector.
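
Using the illustrative per-element losses in that example plus an assumed per-kilometer attenuation and design margin (real values vary by fiber type, wavelength and hardware), a loss budget is just addition:

```python
FIBER_LOSS_DB_PER_KM = 0.35   # assumed attenuation for single-mode cable
SPLICE_LOSS_DB = 0.3          # per-splice figure from the example above
CONNECTOR_LOSS_DB = 0.75      # per-connector figure from the example above

def loss_budget(km: float, splices: int, connectors: int, margin_db: float = 3.0) -> float:
    """Total expected dB loss, plus a design margin for aging and future repairs."""
    return (km * FIBER_LOSS_DB_PER_KM
            + splices * SPLICE_LOSS_DB
            + connectors * CONNECTOR_LOSS_DB
            + margin_db)

# A 15 km FTTH run with 4 splices and 2 connectors
print(f"{loss_budget(15, splices=4, connectors=2):.2f} dB")  # -> 10.95 dB
```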

The biggest signal losses on fiber generally come at the end of a fiber path at the customer premises. Flaws like bends or crimps in the fiber might increase return loss. Going through multiple splices increases the insertion loss. Good installation practices are by far the most important factor in minimizing attenuation and providing for a longer life for a given fiber path.

Network engineers also understand that fibers degrade over time. Fibers might get cut and have to be re-spliced. Connectors get loose and don’t make perfect light connections. Fiber can expand and shrink from temperature extremes and create more reflection. Tiny manufacturing flaws like microscopic cracks will grow over time and create opacity and disperse the light signal.

This is not all bad news, and modern fiber electronics allow for a fairly high level of dB loss before the fiber loses functionality. A fiber route installed properly, using quality connectors and good splices, can last a long time.

Modernizing CPNI Rules

I think we badly need new CPNI rules for the industry. CPNI stands for ‘Customer Proprietary Network Information’, and the CPNI rules govern the use of data that telcos and ISPs gather on their customers. CPNI rules are regulated by the FCC, and I think it’s fully within their current mandate to update the rules to fit the modern world.

While CPNI is related to privacy issues it’s not exactly the same. CPNI rules involve how ISPs use the customer data that they must gather in order to make the network operate. Originally CPNI rules involved telephone call details – who we called, who called us, etc. Telcos have been prohibited by CPNI rules from using this kind of data without the express consent of a consumer (or else in response to a valid subpoena from law enforcement).

Today the telcos and ISPs gather a lot more information about us than just telephone calling information. For instance, a cellular company not only knows all of your call details, but they know where you are whenever you call, text or make a data connection from your cellphone. Every ISP knows every web search you make since they are the ones routing those requests to the Internet. If you buy newer ISP products like home automation they know all sorts of details that they can gather from monitoring motion detectors and other devices that are part of their service.

Such CPNI data is valuable because it can be used by the ISP to assemble a profile of each customer, particularly when CPNI data is matched with data gathered from other sources. Every large ISP has purchased a business arm aimed at helping them monetize customer data. The ISPs are all envious of the huge advertising revenues generated by Facebook and Google and want to climb into the advertising game.

The FCC was given the authority to limit how carriers use customer proprietary data by Section 222 of the Communications Act of 1934, as amended. That statute specifically prohibits carriers from using CPNI data for marketing purposes. Over the years the FCC developed more specific CPNI rules that governed telcos. However, the FCC has not updated the specific CPNI rules to cover the wide range of data that ISPs gather on us today. Telcos still ask customers for permission to use their telephone records, but they are not required to get customer permission to track the websites we visit or our location when we use a cellphone.

The FCC could invoke CPNI protections for companies that they regulate. It gets dicier for the FCC to expand CPNI rules past traditional carriers. All sorts of web companies also gather information on users. Google makes most of their money through their search engine. They not only charge companies to get higher ranking for Google searches, but they monetize customer data by building profiles of each user that they can market to advertisers. These profiles are supposedly very specific – they can direct advertisers to users who have searched for any specific topic, be it people searching for information about diabetes or those looking to buy a new truck.

There are many who argue that companies like Google should be brought under the same umbrella of rules as ISPs. The ISPs rightfully claim that companies like Google have a major market advantage. But the ISPs clearly prefer the regulatory world where no company is subject to CPNI rules.

There are other web applications that are harder to justify as being related to CPNI. For example, a social network like Facebook gathers huge amounts of private data about its users – but those users voluntarily build profiles and share that data freely.

There are more complicated cases such as Amazon, which has been accused of using customer shopping data to develop its own product lines to directly compete with vendors selling on the Amazon platform. The company clearly uses customer data for their own marketing purposes – but Amazon is clearly not a carrier and it would be a huge stretch to pull them under the CPNI rules.

It’s likely that platforms like Facebook or Amazon would have to be regulated with new privacy rules rather than with CPNI rules. That requires an act of Congress, and it’s likely that any new privacy rules would apply to a wide range of companies that use the web – the approach taken by the European Union.