Terahertz WiFi

While labs across the world are busy figuring out how to implement the 5G standards, there are scientists already working in higher frequencies looking to achieve even faster speeds. The frequencies now being explored are labeled the terahertz range and sit at 300 GHz and above. This is the upper end of the radio spectrum and lies just below infrared light.
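For a sense of scale, a quick back-of-the-envelope calculation (sketched below in Python, using nothing more than the speed of light) shows how short these wavelengths are – 300 GHz works out to roughly a one-millimeter wave, which is why these signals are also called sub-millimeter waves.

```python
# Illustrative only: relate frequency to wavelength to show where the
# terahertz range sits in the spectrum.
C = 299_792_458  # speed of light in meters per second

for freq_ghz in (300, 1_000, 3_000):
    wavelength_mm = C / (freq_ghz * 1e9) * 1_000
    print(f"{freq_ghz:>5} GHz -> wavelength of roughly {wavelength_mm:.2f} mm")
```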

Research in these frequencies started around 2010, and since then the achieved broadband transmission speeds have progressed steadily. The first big announced breakthrough in the spectrum came in 2016 when scientists at the Tokyo Institute of Technology achieved speeds of 34 Gbps using the WiFi standard in the 500 GHz spectrum range.

In 2017, researchers at Brown University School of Engineering were able to achieve 50 Gbps. Later that year a team of scientists from Hiroshima University, the National Institute of Information and Communications Technology and Panasonic Corporation achieved a speed of 105 Gbps. This team has also subsequently developed a transceiver chip that can send and receive data at 80 Gbps – meaning these faster speeds could be moved out of the lab and into production.

As with all radio transmissions through the air, the higher the frequency the shorter the distance a signal travels before it scatters. That makes short transmission distances the biggest challenge for using these frequencies. However, several of the research teams have shown that transmissions perform well when bounced off walls, and the hope is to eventually achieve distances as long as 10 meters (about 33 feet).
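To put some rough numbers behind the distance problem, here is a minimal Python sketch using the standard free-space path loss formula. It ignores the atmospheric absorption and scattering that hit terahertz signals especially hard, so treat it as a floor rather than a prediction – but even this simple math shows how much harder it is to cover the same 10 meters at 300 GHz than at traditional WiFi frequencies.

```python
import math

def fspl_db(distance_m, freq_ghz):
    """Free-space path loss in dB for a distance in meters and frequency in GHz."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_ghz * 1e9) - 147.55

# Compare a traditional WiFi band, a millimeter wave band, and the low terahertz range.
for freq in (5, 60, 300):
    print(f"{freq:>4} GHz over 10 m: {fspl_db(10, freq):.1f} dB of free-space path loss")
```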

The real benefit of superfast bandwidth will likely be for super-short distances. One of the uses of these frequencies could be to beam data into computer processors. One of the biggest impediments to faster computing is the physical act of getting data to where it’s needed on time, and terahertz lasers could be used to speed up chips.

Another promising use of the faster lasers is to create faster transmission paths on fiber. Scientists have already been experimenting, and it looks like these frequencies can be channeled through extremely thin fibers to achieve speeds much faster than anything available today. Putting this application into the field is probably a decade or more away – but it’s a breakthrough that’s needed. Network engineers have been predicting that we will exhaust the capabilities of current fiber technology on the transmission paths between major Internet POPs. As the volume of bandwidth we use keeps doubling, in a decade or two we will be transmitting more data between places like New York and Washington DC than all of the existing fibers can theoretically carry. When fiber routes get that full the problem can’t be easily fixed by adding more fibers – not when volumes double every few years. We need solutions that fit more data into existing fibers.
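The compounding math behind that claim is easy to illustrate. Assuming, purely for illustration, that backbone traffic doubles every three years (the actual doubling period varies by route), a short sketch shows how quickly demand outruns any fixed number of fibers:

```python
doubling_period_years = 3  # assumed for illustration only

for years in (10, 20):
    growth = 2 ** (years / doubling_period_years)
    print(f"In {years} years, traffic grows to roughly {growth:.0f}x today's volume")
```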

There are other applications that could use the higher frequencies today. For example, there are specific applications like real-time medical imaging and real-time processing for intricate chemical engineering that need faster bandwidth than is possible with 5G. The automated factories that will create genetic-based drug solutions will need much faster bandwidth. There are also more mundane uses of the higher frequencies. For example, these frequencies could be used to replace X-rays and reduce radiation risks in doctor’s offices and airports.

No matter what else the higher frequencies can achieve, I’m holding out for Star Trek holodecks. The faster terahertz frequencies could support creation of the complex real-time images involved in truly immersive entertainment.

These frequencies will become the workhorse for 6G, the next generation of wireless technology. The early stages of developing a 6G standard are underway, with expectations of having a standard by perhaps 2030. Of course, the hype for 6G has also already begun. I’ve already seen several tech articles that talk about the potential for ultrafast cellular service using these frequencies. The authors of these articles don’t seem to grasp that we’d need a cell site every twenty feet – but facts don’t seem to get in the way of good wireless hype.

What’s the Future for PEG Channels?

Last September in Docket 05-0311 the FCC proposed changes to cable regulations that could threaten the continued existence of PEG channels. The term PEG stands for Public, Educational and Government and refers to channel slots given to local governments and school systems for programming. Local governments routinely broadcast government meetings on these channels. In many areas, the channels are used during the day by the school system. PEG channels have a particularly important role during local emergencies and often become the only source of local information during floods, hurricanes, or big fires.

PEG channels came into existence during the negotiations for cable franchises in the 1970s and 1980s. Many, but not all, cities asked for a channel slot to be used to present important local content to subscribers. Cities have routinely used the channels to broadcast city council meetings, while some cities go much further and broadcast a wide array of public meetings. Like most consultants who work with cities, I’ve been broadcast on local PEG channels hundreds of times (and if that didn’t break the system, I don’t know what would!)

The proposed FCC rule change would allow cable companies to put a value on in-kind contributions required by franchise agreements and deduct those from the amounts paid for franchise fees. In addition to providing PEG channels, other in-kind contributions might include things like free broadband or cable TV for city offices, and broadband connections between government locations (often referred to as an I-Net).

To be fair to the FCC, the proposed rules are considering excluding PEG channels from the list of in-kind contributions – but that exclusion is no sure thing. Like most FCC dockets, this one has no guaranteed decision date and could be decided at any time.

Numerous local and federal politicians have commented on the docket and are begging the FCC to not kill PEG channels. They figure, probably correctly, that the cable companies will place a too-high value on the local channel slot as a way to lower their costs.

One of the interesting things about this docket is that cable companies don’t pay franchise fees – these are invariably passed on to customers and added to customer bills. Cable companies have argued for many years that they are at a disadvantage because their customers pay the franchise fees – often set at levels between 3% and 5% – while these fees don’t apply to satellite TV or to the newer online programming.

If the FCC adopts the proposed rules, and if the in-kind contributions apply to PEG channels, then local governments will face a dilemma. Franchise fees today go straight into general city coffers in most cities. These fees have been steadily dropping in recent years due to cord cutting, and as customers leave a cable company the franchise fees drop accordingly.

Cities would be faced with covering the cost of in-kind contributions, funded directly out of the franchise fees they’ve collected for the last 40-50 years. This would pit different parts of local government against each other – should a city accept lower franchise fees or else kill the PEG channels? Most cities know that PEG channels are the only way that many citizens have of following the actions of local governments. I’ve visited cities where a significant proportion of the community watches city council meetings, particularly when there is a topic of big local interest. Cities could probably live-stream council meetings to the web – but they understand that not everybody has broadband access, particularly in rural communities. It’s much easier for citizens to follow local government if the meetings are routinely rotated on a PEG channel.

Cities also face the loss of other in-kind contributions, although those have been largely going away in recent years as franchise agreements come up for renegotiation. There was a time when the cable companies provided a free or subsidized broadband connection between city buildings as part of an I-Net. While there are many I-Nets still in existence, many have been discontinued or are no longer offered for free. In such cases, cities either pay for the broadband connections or build their own fiber to connect city buildings.

In the long run, this change is probably inevitable. While nationwide cable penetration was still at around 70% at the end of last year, the rate of cord cutting seems to be accelerating. The most current snapshot of cord cutting shows a rate of customer loss equal to 3% of market share annually. Assuming that traditional cable TV follows the path of landline voice service, the amount of franchise fees – and even the requirement to have a franchise – will diminish. At some point, if Congress ever passes another telecom act, it will probably consider deregulating both cable TV and landline voice.

Cellular Broadband Speeds – 2019

Opensignal recently released their latest report on worldwide cellular data speeds. The company examined over 139 billion cellphone connections in 87 countries in creating this latest report.

South Korea continues to have the fastest cellular coverage in the world with an average download speed of 52.4 Mbps. Norway is second at 48.2 Mbps and Canada third at 42.5 Mbps. The US was far down the list in 30th place with an average download speed of 21.3 Mbps. Our other neighbor Mexico had an average download speed of 14.9 Mbps. At the bottom of the list are Iraq (1.6 Mbps), Algeria (2.1 Mbps) and Nepal (4.4 Mbps). Note that these average speeds represent all types of cellular data connections including 2G and 3G.

Cellular broadband speeds have been improving rapidly in most countries. For instance, in the 2017 report, Opensignal showed South Korea at 37.5 Mbps and Norway at 34.8 Mbps. The US in 2017 was in 36th place at only 12.5 Mbps.

Earlier this year Opensignal released their detailed report about the state of mobile broadband in the United States. This report looks at speeds by carrier and also by major metropolitan area. The US cellular carriers have made big strides just since 2017. The following table compares download speeds for 4G LTE by US carrier for 2017 and 2019.

              2019                       2017
              Download     Latency      Download     Latency
AT&T          17.8 Mbps    57.8 ms      12.9 Mbps    63.8 ms
Sprint        13.9 Mbps    70.0 ms       9.8 Mbps    70.1 ms
T-Mobile      21.1 Mbps    60.6 ms      17.5 Mbps    62.8 ms
Verizon       20.9 Mbps    62.6 ms      14.9 Mbps    67.3 ms

Speeds are up across the board. Sprint increased speeds over the two years by more than 40%. Latency for 4G is still relatively high. For comparison, fiber-to-the-home networks have latency in the range of 10 ms and coaxial cable networks have latency between 25 – 40 ms. The poor latency in cellular networks is one of the reasons why browsing the web on a cellphone seems so slow. (The other reason is that cellphone browsers prioritize graphics over speed.)
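A simplified example shows why latency, not just raw speed, makes mobile browsing feel sluggish. Loading a typical page involves many small, partly sequential round trips (DNS, TCP and TLS handshakes, dozens of object fetches). The round-trip count below is a hypothetical number chosen only to illustrate the difference between network types:

```python
sequential_round_trips = 40  # hypothetical count for a graphics-heavy web page

for network, latency_ms in [("4G LTE", 60), ("Cable HFC", 30), ("FTTH", 10)]:
    wait_seconds = sequential_round_trips * latency_ms / 1000
    print(f"{network:9s}: ~{wait_seconds:.1f} seconds spent just waiting on round trips")
```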

Cellular upload speeds are still slow. In the 2019 tests, the average upload speeds were AT&T (4.6 Mbps), Sprint (2.4 Mbps), T-Mobile (6.7 Mbps) and Verizon (7.0 Mbps).

Speeds vary widely by carrier and city. The fastest cellular broadband market identified in the 2019 tests was T-Mobile in Grand Rapids, Michigan with an average 4G speed of 38.3 Mbps. The fastest upload speed was provided by Verizon in New York City at 12.5 Mbps. Speeds vary by market for several reasons. First, the carriers don’t deploy the same spectrum everywhere in the US, so some markets have less spectrum than others. Markets vary in speed due to the state of upgrades – at any given time cell sites are at different levels of software and hardware upgrades. Finally, markets also vary by cell tower density and markets that serve more customers for each tower are likely to be slower.

Many people routinely take speed tests for their home landline broadband connection. If you’ve not taken a cellular speed test it’s an interesting experience. I’ve always found that speeds vary significantly with each speed test, even when run back-to-back. As I was writing this blog I took several speed tests that varied in download speeds between 12 Mbps and 23 Mbps (I use AT&T). My upload speeds also varied, with a top speed of 3 Mbps and one test that couldn’t maintain the upload connection and measured 0.1 Mbps. While landline broadband connections maintain a steady connection to an ISP, a cellphone establishes a new connection every time you try to download, and speeds can vary depending upon the cell site and channel your phone connects to and the overall traffic at the cell site at the time of connection. Cellular speeds can also be affected by temperature, precipitation and all of those factors that make wireless coverage a bit squirrelly.

It’s going to be a few years until we see any impact on the speed test results from 5G. As you can see by comparing to other countries, the US still has a long way to go to bring 4G networks up to snuff. One of the most interesting aspects of 5G is that speed tests might lose some of their importance. With frequency slicing, a cell site will size a data channel to meet a specific customer need. Somebody downloading a large software update should be assigned a bigger data channel with 5G than somebody who’s just keeping up with sports scores. It will be interesting to see how Opensignal accounts for that kind of slicing.

Should Satellite Broadband be Subsidized?

I don’t get surprised very often in this industry, but I must admit that I was surprised by the amount of money awarded for satellite broadband in the reverse auction for CAF II earlier this year. Viasat, Inc., which markets as Exede, was the fourth largest winner, collecting $122.5 million in the auction.

I understand how Viasat won – it’s largely a function of the way that reverse auctions work. In a reverse auction, each bidder lowers the amount of their bid in successive rounds until only one bidder is left in any competitive situation. The whole pool of bids is then adjusted to meet the available funds, which could mean an additional reduction of what winning bidders finally receive.

Satellite providers, by definition, have a huge unfair advantage over every other broadband technology. Viasat was already in the process of launching new satellites – and they would have launched them with or without the FCC grant money. Because of that, there is no grant level too low for them to accept out of the grant process – they would gladly accept getting only 1% of what they initially requested. A satellite company can simply outlast any other bidder in the auction.
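A toy model makes the dynamic clear. In the sketch below the bidder names and the minimum support each one can accept (expressed as a fraction of the reserve price) are entirely hypothetical – the point is simply that a bidder whose costs are already sunk can ride the price down until everyone else drops out:

```python
def reverse_auction(reserve, floors, step=0.05):
    """Descending-price rounds; bidders exit once the price falls below their floor."""
    price = reserve
    remaining = dict(floors)
    while len(remaining) > 1 and price > 0:
        price = round(price - reserve * step, 2)
        remaining = {name: floor for name, floor in remaining.items() if floor <= price}
    return price, list(remaining)

# Hypothetical minimum acceptable support, as a fraction of the reserve price.
floors = {"fiber builder": 0.70, "fixed wireless": 0.55, "satellite": 0.01}
price, winners = reverse_auction(reserve=1.0, floors=floors)
print(f"Clearing price: about {price:.2f} of reserve, won by {winners}")
```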

This is particularly galling since Viasat delivers what the market has already deemed to be inferior broadband. Viasat’s download speeds of at least 12 Mbps were just fast enough to satisfy the reverse auction; the other current satellite provider, HughesNet, offers speeds of at least 25 Mbps. The two issues that customers have with satellite broadband are the latency and the data caps.

By definition, the latency for a satellite at a 23,000-mile orbit is at least 476 ms (milliseconds) just to account for the distance traveled to and from the earth. Actual latency is often above 600 ms. The rule of thumb is that real-time applications like VoIP, gaming, or holding a connection to a corporate LAN start having problems when latency is greater than 100-150 ms.
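The math behind that latency floor is straightforward. A quick sketch, using the standard geostationary altitude of roughly 22,236 miles, shows why even a perfect satellite link can't get much below the figure above:

```python
GEO_ALTITUDE_MILES = 22_236   # geostationary orbit above the equator
SPEED_OF_LIGHT_MPS = 186_282  # miles per second

# A request travels ground -> satellite -> ground, and the reply makes the
# same trip back, so the signal covers roughly four times the orbital altitude.
one_way_ms = 2 * GEO_ALTITUDE_MILES / SPEED_OF_LIGHT_MPS * 1_000
round_trip_ms = 2 * one_way_ms
print(f"Best-case one-way (up and down): {one_way_ms:.0f} ms")
print(f"Best-case round trip: {round_trip_ms:.0f} ms")
```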

Exede no longer cuts customers off for the month once they reach the data cap, but instead reduces speeds for any customer over the cap whenever the network is busy. Customer reviews say this can be extremely slow during prime times. The monthly data caps are small, with plans ranging from $49.99 per month for a 10 GB cap to $99.95 per month for a 150 GB cap. To put those caps into perspective, OpenVault recently reported that the average landline broadband household used 273.5 GB of data per month in the first quarter of 2019.

Viasat has to be thrilled with the result of the reverse auction. They got $122.5 million for something they were already doing. The grant money isn’t bringing any new option to customers who were already free to buy these products before the auction. There is no better way to say it other than Viasat got free money due to a loophole in the grant process. I don’t think they should have been allowed into the auction since they aren’t bringing any broadband that is not already available.

The bigger future issue is whether the new low-earth orbit satellite companies will qualify for future FCC grants, such as the $20.4 billion grant program starting in 2021. The new grant programs are also likely to be reverse auctions. There is no doubt that Jeff Bezos or Elon Musk will gladly take government grant money, and there is no doubt that they can underbid any landline ISP in a reverse auction.

For now, we don’t know anything about the speeds that will be offered by the new satellites. They claim that latency will be about the same as cable TV networks, at roughly 25 ms. We don’t know about data plans and data caps, although Elon Musk has hinted at having unlimited data plans – we’ll have to wait to see what is actually offered.

It would be a tragedy for rural broadband if the new (and old) satellite companies were to win any substantial amount of the new grant money. To be fair, the new low-orbit satellite networks are expensive to launch, with price tags for each of the three providers estimated to be in the range of $10 billion. But these companies are using these satellites worldwide and will be launching them with or without help from an FCC subsidy. Rural customers are going to best be served in the long run by having somebody build a network in their neighborhood. It’s the icing on the cake if they are also able to buy satellite broadband.

Are You Ready for 10 Gbps?

Around the world, we’re seeing some migration to 10 Gbps residential broadband. During the last year the broadband providers in South Korea, Japan, and China began upgrading to the next-generation PON and are offering the blazingly fast broadband products to consumers. South Korea is leading the pack and expects to have the 10 Gbps speed available to about 50% of subscribers by the end of 2022.

In the US there are a handful of ISPs offering a 10 Gbps product, mostly for the publicity – but they stand ready to install the faster product. Notable are Fibrant in Salisbury, NC and EPB in Chattanooga; EPB was also among the first to offer a 1 Gbps residential product a few years ago.

I have a lot of clients who already offer 10 Gbps connections to large business and carrier customers like data centers and hospital complexes. However, except for the few pioneers, these larger bandwidth products are being delivered directly to a single customer using active Ethernet technology.

There are a few hurdles for offering speeds over a gigabit in the US. Perhaps foremost is that there are no off-the-shelf customer electronics that can handle speeds over a gigabit – the typical WiFi routers and computers work at slower speeds. The biggest hurdle for an ISP continues to be the cost of the electronics. Today the cost of next-generation PON equipment is high and will remain so until the volume of sales brings the per-unit prices down. The industry market research firm Ovum predicts that we’ll see widespread 10 Gbps consumer products starting in 2020 but not gaining traction until 2024.

In China, Huawei leads the pack. The company has a 10 Gbps PON system that is integrated with a 6 Gbps WiFi 6 router for the home. The system is an easy overlay on top of the company’s traditional GPON network gear. In South Korea the largest ISP, SK Broadband, has worked with Nokia to develop a proprietary PON technology used today only inside South Korea. Like Huawei’s, this overlays onto the existing GPON network. In Japan the 10 Gbps PON network is powered by Sumitomo, a technology only being sold in Japan. None of these technologies has made a dent in the US market, with Huawei currently banned due to security concerns.

In the US there are two technologies being trialed. AT&T is experimenting with XGS-PON technology. They plan to offer 2 Gbps broadband, upgradable to 10 Gbps, in the new high-tech community of Walsh Ranch being built outside of Ft. Worth. AT&T is currently trialing the technology at several locations within its FTTP network that now covers over 12 million passings. Verizon is trying the NG-PON2 technology but is mostly planning to use this to power cell sites. It’s going to be hard for any ISP to justify deployment of the new technologies until somebody buys enough units to pull down the cost.
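For context on what these technologies deliver per home, here is a rough sketch of shared PON capacity. The line rates are nominal published figures and the split ratio is a common design choice, and the math ignores overhead and real-world oversubscription – it's only meant to show the per-home headroom each generation buys:

```python
# Nominal downstream line rates, in Gbps, shared by everyone on a PON.
pon_generations = {
    "GPON":    2.5,   # roughly 2.5 Gbps downstream
    "XGS-PON": 10.0,  # 10 Gbps symmetrical
    "NG-PON2": 40.0,  # four stacked 10 Gbps wavelengths
}
split_ratio = 32  # a common passive split; 64 is also typical

for tech, downstream_gbps in pon_generations.items():
    per_home_mbps = downstream_gbps * 1_000 / split_ratio
    print(f"{tech:8s}: ~{per_home_mbps:,.0f} Mbps per home if all {split_ratio} homes download at once")
```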

Interestingly, Cable Labs is also working on a DOCSIS upgrade that will allow for faster speeds up to 10 Gbps. The problem most cable networks will have is in finding space on their networks for the channels needed to support the faster speeds.

There are already vendors and labs exploring 25 Gbps and 50 Gbps PON. These products will likely be used for backhaul and transport at first. The Chinese vendors think the leap forward should be to 50 Gbps, while other vendors are considering a 25 Gbps upgrade path.

The real question that needs to be answered is whether there is any market for 10 Gbps bandwidth outside the normally expected uses like cellular towers, data centers, and large business customers. This same question was asked when EPB in Chattanooga and LUS in Lafayette, Louisiana rolled out the earliest 1 Gbps residential bandwidth. Both companies were a bit surprised when they got a few instant takers for the faster products – in both markets from doctors who wanted to be able to analyze MRIs and other big files at home. There are likely a few customers who need speeds above 1 Gbps, with doctors again being good candidates. Just as broadband speeds have advanced, the medical imaging world has grown more sophisticated in the last decade and is creating huge data files. The ability to download these quickly offsite will be tempting to doctors.

I think we are finally on the verge of seeing data use cases that can eat up most of a gigabit of bandwidth in the residential environment. For example, uncompressed virtual and augmented reality can require masses of downloaded data in nearly real-time. As we start seeing use cases for gigabit speeds, the history of broadband has shown that the need for faster speeds is probably not far behind.

The End of the Central Office?

One of the traditional costs for bringing fiber to a new market has always included the creation of some kind of central office space. This might mean modifying space in an existing building or building a new building or large hut. In years past a central office required a lot of physical space, but we are finally to the point with technology where the need for a big central office is often disappearing.

A traditional central office started with the need to house the fiber terminating electronics that connect the new market to the outside world. There also is the need to house and light the electronics facing the customers – although in some network design configurations some of the customer facing electronics can be housed in remote huts in neighborhoods.

A traditional central office needs room for a lot of other equipment. First is significant space for batteries to provide short-term backup in case of power outages. For safety reasons the batteries are often placed in a separate room. Central offices also need space for the power plant used to make the conversion from AC power to DC power. Central offices also usually need significant air conditioning and need room to house the cooling units. If the fiber network terminating to a central office is large enough there is also the requirement for some kind of fiber management system needed to separate the individual fibers in a neat and sensible way. Finally, if the above needs meant building a large enough space, many ISPs also built space to provide working and office space for technicians.

Lately I’ve seen several fiber deployments that don’t require the large traditional central office space. This is largely due to the evolution of the electronics used for serving customers in a FTTP network. For example, OLT (optical line termination) electronics have been significantly compressed in size and density, and a shelf of equipment can now perform the same functions that would have required much of a full rack a decade ago. As that equipment has shrunk, the power requirements have also dropped, reducing the size of the power plant and the batteries.

I’ve seen several markets where a large cabinet provides enough room to replace what would have required a full central office a decade ago. These are not small towns, and two of the deployments are for towns with populations over 20,000.

As the footprint for the ‘central office’ has decreased there’s been a corresponding drop in costs. There are several supply houses that will now pre-install everything needed into the smaller cabinet / hut and deliver the whole unit complete and ready to go after connecting to power and splicing to fiber.

What I find interesting is that I still see some new markets built in the more traditional way. In that same market of 20,000 people it’s possible to still use a configuration that constructs several huts around the city to house the OLT electronics. For purposes of this blog I’ll refer to that as a distributed configuration.

There are pros and cons to both configurations. The biggest benefit of having one core hut or cabinet is lower cost. That means one pre-fab building instead of having to build huts or cabinets at several sites.

The distributed design also has advantages. A redundant fiber ring can be established with a network consisting of at least three huts, meaning that fewer parts of the market will lose service due to a fiber cut near to the core hub. But the distributed network also means more electronics in the network since there is now the need for electronics to light the fiber ring.

The other advantage of a distributed network is that there are fewer fibers terminating to each hut compared to having all customer fibers terminating to a single hut. The distributed network likely also has smaller fibers in the distribution network since fiber can be sized for a neighborhood rather than for the whole market. That might mean less splicing required during the initial construction.

Anybody building a new fiber network needs to consider these two options. If the market is large enough then the distributed network becomes mandatory. However, many engineers seem to be stuck with the idea that they need multiple huts and a fiber ring even for smaller towns. That means paying a premium price to achieve more protection against customer outages. Yet since raising the money to build a fiber network is often the number one business consideration, the ability to save on electronics costs can be compelling. It would not be unusual to see the single-hub configuration save half a million dollars or more. There is no configuration that is the right choice for all situations. Just be sure, if you’re building FTTP in a new market, that you consider the options.
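To make the trade-off concrete, here is a purely hypothetical comparison of the two designs for a small-town build. Every number below is invented for illustration – real quotes vary widely – but it shows where a difference of roughly half a million dollars can come from:

```python
single_hut = {
    "pre-fab hut or cabinet, pre-wired": 250_000,
    "core / OLT electronics (one site)": 300_000,
}
distributed = {
    "three neighborhood huts": 3 * 150_000,
    "core / OLT electronics (three sites)": 3 * 150_000,
    "ring transport electronics": 150_000,
}

cost_single = sum(single_hut.values())
cost_distributed = sum(distributed.values())
print(f"Single-hut design:  ${cost_single:,}")
print(f"Distributed design: ${cost_distributed:,}")
print(f"Premium paid for the redundant ring: ${cost_distributed - cost_single:,}")
```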

Open Access for Apartment Buildings

San Francisco recently passed an interesting ordinance that requires landlords of apartments and multi-tenant business buildings to allow multiple ISPs access to bring broadband to tenants. This ordinance raises all sorts of regulatory and legal questions. At its most recent monthly meeting the FCC jumped into the fray and voted on language that is intended to kill or weaken the ordinance.

The FCC’s ruling says that a new ISP can’t share wiring that is already being used by an existing broadband provider. I call this an odd ruling because there are very few technologies that share wires between competitors – with most fast broadband technologies a new ISP must rewire the building or beam broadband wirelessly. This means the FCC’s prohibition might not make much of a difference in terms of overturning the San Francisco ordinance. The only competitive broadband technology that routinely uses an existing wire is G.Fast, and even that can only be used by one broadband provider at a time and not shared. I can’t think of any examples of a practical impact of the FCC ruling.

The FCC’s ruling is odd for a number of other reasons. It’s certainly out of the ordinary for a federal agency to react directly to a local ordinance. My guess is that the FCC knows that many other cities are likely to jump onto the open access bandwagon. Cities are getting a lot of complaints from apartment tenants who don’t have access to the same broadband options as single family homes.

The FCC ruling is also unusual because it violates the FCC’s overall directive from Congress to be pro-competition. The FCC order clearly falls on the side of being anti-competitive.

What I find most striking about this decision is that this FCC gave up the authority to regulate broadband when it killed Title II regulation last year. I guess what they meant was that they are giving up regulating broadband except when it suits them to regulate anyway. It’s an interesting question whether the agency still has the authority to make this kind of ruling. It’s likely this lack of regulatory authority that forced the FCC to make such a narrow ruling instead of just overturning the San Francisco ordinance. I always knew it wouldn’t be long before the FCC selectively wanted back some of its former Title II authority.

The MDU market has an interesting history. Historically the large apartment buildings were served by the incumbent providers. The incumbents often stealthily gave themselves exclusive rights to serve apartments through deceptive contractual practices, and the FCC prohibited some of the most egregious abuses.

For many years competitors largely weren’t interested in apartments because the cost of rewiring most buildings was prohibitive. In the last few years the MDU market has changed significantly. There are now wiring and wireless technologies that make it more affordable to serve many large apartment buildings, and there are numerous competitors operating in the space. Many of them bring a suite of services far beyond the triple play: security, smart camera solutions to make tenants feel safe, smart sensors of various kinds, and WiFi in places like hallways, stairwells, parking garages and outside. These new competitors often require an exclusive contract with a landlord as a way to help cover the cost of bringing the many ancillary services.

There is another regulatory issue to consider. There have been several laws from Congress, tested in the courts, that give building owners the right to keep ISPs off their premises – this applies to single family homes as well as the largest apartment buildings. It won’t be surprising to see building owners suing the City for violating their property rights.

Yet another issue that muddies the water is that landlords often elect to act as the ISP and build broadband and other services into the rent. Does the San Francisco ordinance prohibit this practice, since it’s hard for any ISP to compete with ‘free’ service?

Another area affected by the ordinance might best be described as aesthetics. Landlords often have stringent rules like requiring that ISPs hide wiring, electronics boxes, and outdoor enclosures or huts. It’s a bit ironic that the City of San Francisco would force building owners to allow in multiple ISPs and the myriad wires and boxes that come with open access. San Francisco recently got a positive court ruling saying that aesthetics can be considered for small cell deployments and it seems odd in MDUs that the City is favoring competition over aesthetics.

At the end of the day I think the City might be sorry that they insinuated themselves into an extremely complicated environment. There are likely dozens of different relationships today between landlords and ISPs and it seems like a slippery slope to try to force all apartment owners to offer open access.

I know cities have been struggling with the open access issue. They receive complaints from apartment tenants who want different broadband options. It’s not hard to understand why a city with a lot of apartment dwellers might feel compelled to tackle this issue. I know other cities that have considered ordinances like the San Francisco one and abandoned the issue once they understood the complexity.

The City made an interesting choice with the ordinance. The City elected to require open access to help foster consumer choice. However, it’s possible that the long-term results might not be what the City expected and the ruling could drive away the creative ISPs who elect not to compete in an open access environment.

It seemed almost inevitable that the City ordinance would be challenged by somebody – but the courts were a more logical place to fight this battle than the FCC. If anything, the FCC has just clouded the issue by layering on a toothless prohibition against the sharing of wires.

FCC Looks to Kill Copper Unbundling

FCC Chairman Ajit Pai circulated a draft order that would start the process of killing the unbundling of copper facilities. This unbundling was originally ordered by the Telecommunications Act of 1996 and unleashed telephone and broadband competition in the US. The law was implemented before the introduction of DSL, and newly formed competitors (CLECs) were able to use telco copper to compete for voice and data service using T1s. The 1996 Act also required that the big telcos offer their most basic products for resale.

The FCC noted that their proposed order will “not grant forbearance from regulatory obligations governing broadband networks”, meaning they are not going to fully eliminate the requirement for copper unbundling. This is because the FCC doesn’t have the authority to fully eliminate unbundling – the obligation was required by Congress, and the FCC is mandated to obey that law until it’s either changed by Congress or until there is no more copper left to unbundle. Much of the industry has been calling for an updated telecommunications act for years, but in the current dysfunctional politics of Washington DC that doesn’t look likely.

The big telcos have hated the unbundling requirement since the day it was passed. Eliminating this requirement has been near the top of their regulatory wish list since 1996. The big telcos’ hatred of unbundling is somewhat irrational since in today’s environment unbundling likely makes them money. There are still CLECs selling DSL from unbundled copper and generating revenues for the telcos that they’d likely not otherwise have. But the hatred for the original ruling has become ingrained in the big telco culture.

The FCC’s proposal is to have a three-year transition from the currently mandated rates, which are set at incremental cost, to some market-based lease rate. I guess we’ll have to see during that transition whether the telcos plan to price CLECs out of the market or whether they will set reasonable lease rates that allow the connections to continue.

This change has the possibility of causing harm to CLECs and consumers. There are still a number of CLECs selling DSL over unbundled copper elements. In many cases these CLECs operate the newest DSL electronics and can offer faster data speeds than the telco DSL. It’s not unusual for CLECs to have 50 Mbps residential DSL. For businesses they can now combine multiple pairs of copper, and I’ve seen unbundled DSL products as fast as 500 Mbps.

There are still a lot of customers choosing to stay with DSL. Some of these customers don’t feel the need for faster data speeds. In other cases it’s because DSL is generally priced cheaper than cable modem products. At CCG we do surveys, and it’s not unusual to find anywhere from 25% to 45% of customers still buying DSL in a market that has a cable competitor. While there are millions of customers annually making the transition to cable modem service, there are still large numbers of households using DSL – it’s many years away from dying.

There is another quieter use of unbundled copper that still has competitors worried. Any competitor that offers voice service using their own switch is still required by law to interconnect to the local incumbent telcos. Most of that interconnection is done today using fiber transport, but there still is a significant impact from unbundled elements.

Surprisingly, the vast majority of the public switched telephone network (PSTN) still uses technology based upon T1s. There was a huge noise made 5 – 10 years ago about a ‘digital transition’ in which the interconnection network was going to migrate to 100% IP. But for the most part this transition never occurred. Competitors can still bring fiber to meet an incumbent telco network, but that fiber signal must still be muxed down to T1 and DS3 channels. The pricing for those interconnections is part of the same rules the FCC wants to kill. CLECs everywhere are going to be worried about seeing huge price increases for the interconnection process.

The big telcos have always wanted interconnection to be done at tariffed special access rates. These are the rates that often had a T1 (1.5 Mbps connection) priced at $700 per month. The unbundled cost for an interconnection T1 is $100 or less in most places and competitors are going to worry about seeing a big price increase to tie their network to telco tandems.

It’s not surprising to see this FCC doing this. They have been checking off the regulatory wish list of the telcos and the cable companies since Chairman Pai took over leadership. This is one of those regulations that the big telcos hate as a policy matter, but which has quietly been working well operationally for decades. There’s no pressing reason for the FCC to make this change. Copper is naturally dying over time and the issue eventually dies with the copper. There are direct measurable benefits to consumers from unbundling, so the real losers are going to be customers who lose DSL connections they are happy with.

Consider Rural Health Care Funding

One of the sources of the Universal Service Fund that often is forgotten is the Rural Health Care Program. The FCC recently carried forward $83.2 million that was unspent in 2018 into the 2019 funding pool. In June Chairman Ajit Pai proposed to raise the annual cap on this fund from $400 million to $571 million. That’s where this fund would have been today had the original fund been indexed by inflation since it was started in 1997. He also proposes that the cap on this Fund grow by inflation in the future.

I have a lot of clients who help their customers benefit from the Schools and Libraries Fund, but many of them never think about doing the same thing with the Rural Health Care Fund.

The Rural Health Care Program provides funding to eligible health care providers for broadband and voice services. Eligible health care providers must be either a public or a non-profit entity. The funds can be used for entities such as 1) educational institutions offering post-secondary medical instruction, teaching hospitals and medical schools; 2) community health centers providing care to migrants; 3) local health departments; 4) community mental health centers; 5) non-profit hospitals; 6) rural health clinics; 7) skilled nursing facilities; and 8) consortiums of providers that include one or more of the preceding list.

The program comprises two parts: the Healthcare Connect Fund Program and the Telecommunications Program. The Healthcare Connect Program provides support for high-speed broadband connections. Eligible entities can receive as much as a 65% discount on monthly broadband bills for services like Internet access, dark fiber, or traditional telco data services. This works a lot like the E-Rate program for schools and libraries. The health care facility pays the reduced rate for service and the partner ISP can collect the discount from the Universal Service Fund.
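The money flow is easy to illustrate. In the sketch below the monthly circuit rate is hypothetical; the 65% figure is the maximum Healthcare Connect discount described above:

```python
monthly_broadband_rate = 1_500  # hypothetical monthly rate for a rural clinic circuit
discount_rate = 0.65            # maximum Healthcare Connect Fund discount

clinic_pays = monthly_broadband_rate * (1 - discount_rate)
isp_collects_from_fund = monthly_broadband_rate * discount_rate
print(f"Clinic pays the ISP:        ${clinic_pays:,.2f} per month")
print(f"ISP collects from the Fund: ${isp_collects_from_fund:,.2f} per month")
```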

The health care providers can also ask for assistance with telecommunications equipment and can use the funds to help pay for the construction of fiber facilities. This funding can be an interesting way for a rural ISP to get some assistance for paying for a fiber route to reach a health care facility (and then use that fiber to also serve other customers).

The Telecommunications Program works a little differently. In that program the health care facility can buy broadband and telecommunication services at rates that are reasonably comparable to rates charged for similar services in nearby urban areas. That’s likely to mean discounts smaller than the 65% in the Healthcare Connect program. Functionally this still works the same and the ISP can collect the difference between the urban rates and the rural rates.

Just like with E-Rate, the health care provider must apply for this funding. But also like E-Rate, it’s typical for an ISP to help prepare the paperwork. The paperwork will feel familiar to any ISP already participating in an E-Rate situation.

Since $83.2 million is being carried over from 2018, it’s obvious that rural health care providers are not all taking full advantage of this program. I see articles all the time decrying a crisis in rural health care due to the high costs of providing services in rural America. This program can bring subsidized broadband connections to health care facilities at a time when that is likely a welcome relief.

This funding has been available for a long time, yet I rarely hear clients talking about it. I’m guessing most rural ISPs have never participated although there are likely eligible health care facilities nearby. This likely will require some training for potential customers. School and library associations have done a good job at alerting their members that this subsidy exists – but I’m guessing the same has not been done with rural health care providers. An ISP willing to tackle the filings can gain a great customer while also benefitting their community.

Millimeter Wave 5G is Fiber-to-the-Curb

I’ve been thinking about and writing about 5G broadband using millimeter wave spectrum for over a year. This is the broadband product that Verizon launched in Sacramento and a few other markets as a trial last year. I don’t know why it never struck me that this technology is the newest permutation of fiber-to-the-curb.

That’s an important distinction to make because naming it this way makes it clear to anybody hearing about the technology that the network is mostly fiber with wireless only for the last few hundred feet.

I remember seeing a trial of fiber-to-the-curb back in the very early 2000s. A guy from the horse country in Virginia had developed the technology of delivering broadband from the pole into the home using radios. He had a working demo of the technology at his rural home. Even then he was beaming fast speeds – his demo delivered an uncompressed video signal from curb to home. He knew that the radios could be made capable of a lot more speed, but in those days I’m sure he didn’t think about gigabit speeds.

The issues that stopped his idea from being practical have been a barrier until recently. There was first the issue of getting the needed spectrum. He wanted to use what we now call midrange spectrum, but which was considered a high spectrum band in 2000 – he would have had to convince the FCC to carve out a slice of spectrum for his application, something that’s always been difficult. He also didn’t have any practical way of getting the needed bandwidth to the pole. ISPs were still selling T1s, 1 Mbps DSL, and 1 Mbps cable modem service, and while fiber existed, the electronics cost for terminating fiber to devices on multiple poles was astronomical. Finally, even then, this guy had a hard time explaining how it would be cheaper to use wireless to get to the home rather than building a drop wire.

Verizon press releases would make you think that they will be conquering the world with millimeter wave radios and deploying the technology everywhere. However, once you think of this as fiber-to-the-curb that business plan quickly makes no sense. The cost of a fiber-to-the-curb network is mostly in the fiber. Any saving from using millimeter wave radios only applies to the last few hundred feet. For this technology to be compelling the savings for the last few hundred feet have to be significant. Do the radio electronics really cost less than fiber drops and the associated fiber electronics?

Any such comparison must consider all the costs of each technology – meaning the cost of installations, repairs, maintenance, and periodic replacement of electronics. And the comparisons need to be honest. For example, every other wireless technology I know requires more maintenance truck rolls than fiber-based technologies due to the squirrelly nature of how wireless behaves in the wild.
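A fair comparison would look something like the sketch below – a per-home lifecycle tally over a decade. Every figure here is a placeholder I made up to show the structure of the comparison, not a real cost estimate; the point is that a cheaper radio can easily be offset by more truck rolls and more frequent electronics replacement.

```python
YEARS = 10  # comparison period

fiber_drop = {
    "drop construction": 700,
    "home electronics (ONT)": 150,
    "annual maintenance": 10 * YEARS,
    "mid-life electronics replacement": 100,
}
mmwave_curb = {
    "pole radio (share per home)": 300,
    "home receiver": 200,
    "annual maintenance / truck rolls": 35 * YEARS,
    "electronics replacement": 250,  # radios refreshed more often
}

for name, costs in (("Fiber drop", fiber_drop), ("Millimeter wave curb", mmwave_curb)):
    print(f"{name:22s}: ${sum(costs.values()):,} per home over {YEARS} years")
```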

Even should the radios become much cheaper than fiber drops, the business case for the technology might still have no legs. There is no way to get around the underlying fact that fiber-to-the-curb means building fiber along residential streets. Verizon has always said that they didn’t extend their fiber FiOS network to neighborhoods where the construction costs were too high. Verizon still seems to be the most cautious of the big ISPs and it’s hard to think that they’ve changed this philosophy. Perhaps the Verizon business plan is to cherry pick in markets outside their footprint, but only where they have the low-cost option of overlashing fiber. If that’s their real business plan then they will not be conquering the world with 5G, but just cherry picking neighborhoods that meet their price profile – a much smaller footprint and business plan than most of the industry is expecting.

My hope is that the rest of the industry starts referring to this technology as fiber-to-the-curb instead of calling it 5G. The wireless companies have gained great advantage from using the 5G name for multiple technologies. They have constantly used the speeds from the fiber-to-the-curb trials and the hot spot trials to make the public think the future means gigabit cellular service. It’s time to start demystifying 5G and using a different name for the different technologies.

Once this is understood it ought to finally be clear that millimeter wave fiber-to-the-curb is not coming everywhere. This sounds incredibly expensive to build in neighborhoods with already-buried utilities. Where density is low it might turn out that fiber-to-the-curb is more expensive than fiber-to-the-home. The big cost advantage seems to come from hitting multiple homes from one pole transmitter. Over time, when anybody can buy the needed components of the technology the best business case will become apparent to us all – for now the whole industry is guessing about what Verizon is doing because we don’t understand the basic costs of the technology.

At the end of the day this is just another new technology to put into the quiver when designing last mile networks. There will undoubtedly be places where fiber-to-the-curb has a cost advantage over fiber drops. Assuming that Verizon or somebody else builds enough of the technology to pull hardware prices down, I picture that a decade from now fiber overbuilders will consider fiber-to-the-curb as part of the mix in designing the last few hundred feet.