US Has Poor Cellular Video

Opensignal recently published a report that looks at the quality of cellular video around the world. Video has become a key part of the cellular experience as people use cellphones for entertainment and as social media and advertising have migrated to video.

The use of cellular video is exploding. Netflix reports that 25% of its total streaming worldwide is sent to mobile devices. The newly launched Disney+ app got over 3 million mobile downloads in just the first 24 hours. The Internet Advertising Bureau says that 62% of video advertisements are being seen on cellphones. Social media sites that are video-heavy, like Instagram and TikTok, are growing rapidly.

The pressure on cellular networks to deliver high-quality video is growing. Ericsson recently estimated that video will grow to be almost 75% of all cellular traffic by 2024, up from 60% today. Look back five years, and video was a relatively small component of cellular traffic. To some extent, US carriers have contributed to the issue. T-Mobile includes Netflix in some of its plans; Sprint includes Hulu or Amazon Prime; Verizon just started bundling Disney+ with cellular plans; and AT&T offers premium movie services like HBO or Starz with premium plans.

The quality of US cellular video was ranked 68th out of 100 countries, the equivalent of an F grade. That places our wireless video experience far behind other industrialized countries and puts the US in the same category as many countries in Africa and South and Central America. One of the most interesting statistics about US video watching is that 38% of users watch video at home using a cellular connection rather than their WiFi connection. This also says a lot about the poor quality of broadband connections in many US homes.

Interestingly, the ranking of video quality is not directly correlated with cellular data speeds. For example, South Korea has the fastest cellular networks but ranked 21st in video quality. Canada has the third-fastest cellular speeds and was ranked 22nd in video quality. The video quality rankings are instead based upon measurable metrics like picture quality, video loading times, and stall rates. These factors together define the quality of the video experience.

One of the reasons that US video quality was rated so low is that the US cellular carriers heavily compress video to save on network bandwidth, which lowers picture quality. The Opensignal report speculates that the primary culprit for poor US video quality is the lack of cellular spectrum. US cellular carriers are now starting to implement new spectrum bands, and there are more auctions for mid-range spectrum coming next year. But it takes 3-4 years to fully integrate new spectrum since it takes time for the cellular carriers to upgrade cell sites, and even longer for handsets using new spectrum to widely penetrate the market.

Only six countries got an excellent rating for video quality – Norway, Czech Republic, Austria, Denmark, Hungary, and the Netherlands. Meanwhile, the US is bracketed on the list between Kyrgyzstan and Kazakhstan.

Interestingly, the early versions of 5G won’t necessarily improve video quality. The best example of this is South Korea, which already has millions of customers using what are touted as 5G phones, yet the country is still ranked 21st in terms of video quality. Cellular carriers treat video traffic differently than other data, and it’s often the video delivery platform that contributes to video problems.

The major fixes to the US cellular networks are at least a few years away for most of the country. The introduction of more small cells, the implementation of more spectrum, and the eventual introduction of features from the 5G specifications will all contribute to a better US cellular video experience. However, with US cellular data volumes doubling every two years, the chances are that the US video rating will drop further before improving significantly. The network engineers at the US cellular companies face an almost unsolvable problem of maintaining network quality while dealing with unprecedented growth.

Modems versus Routers

I have to admit that today’s blog is the result of one of my minor pet peeves – I find myself wincing a bit whenever I hear somebody interchange the words modem and router. That’s easy enough to do since today there are a lot of devices in the world that include both a modem and a router. But for somebody who’s been around since the birth of broadband, there is a big distinction. Today’s blog is also a bit nostalgic as I recalled the many kinds of broadband I’ve used during my life.

Modems. A modem is a device that connects a user to an ISP. Before there were ISPs, a modem made a data connection between two points. Modems are specific to the technology being used to make the connection.

In the picture accompanying this blog is an acoustic coupler, which is a modem that makes a data connection using the acoustic signals from an analog telephone. I used a 300 baud modem (which communicated at 300 bps – bits per second) around 1980 at Southwestern Bell when programming in BASIC. The modem allowed me to connect my telephone to a company mainframe modem and ‘type’ directly into programs stored on the mainframe.

Modems grew faster over time and by the 1990s we could communicate with a dial-up ISP. The first such modem I recall using communicated at 28.8 kbps (28,800 bits per second). The technology was eventually upgraded to 56 kbps.

Around 2000, I upgraded to a 1 Mbps DSL modem from Verizon. This was a device that sat next to an existing telephone jack. If I recall, this first modem used ADSL technology. The type of DSL matters, because a customer upgrading to a different variety of DSL, such as VDSL2, has to swap to the appropriate modem.

In 2006 I was lucky enough to live in a neighborhood that was getting Verizon FiOS on fiber and I upgraded to 30 Mbps service. The modem for fiber is called an ONT (Optical Network Terminal) and was attached to the outside of my house. Verizon at the time was using BPON technology. A customer would have to swap ONTs to upgrade to newer fiber technologies like GPON.

Today I use broadband from Charter, delivered over a hybrid fiber-coaxial network. Cable modems use the DOCSIS standards developed by CableLabs. I have a 135 Mbps connection that is delivered using a DOCSIS 3.0 modem. If I want to upgrade to faster broadband, I’d have to swap to a DOCSIS 3.1 modem – the newest technology on the Charter network.

Routers. A router allows a broadband connection to be split to connect to multiple devices. Modern routers also contain other functions such as the ability to create a firewall or the ability to create a VPN connection.

The most common kind of router in homes is a WiFi router that can connect multiple devices to a single broadband connection. My first WiFi router came with my Verizon FiOS service. It was a single WiFi device intended to serve the whole home. Unfortunately, my house at the time was built in the 1940s and had plaster walls with metal lathing, which created a complete barrier to WiFi signals. Soon after I figured out the limitations of the WiFi, I bought my first Ethernet router and used it to string broadband connections over Cat 5 cables to other parts of the house. It’s probably good that I was single at the time because I had wires running all over the house!

Today it’s common for an ISP to combine the modem (which talks to the ISP network) and the router (which talks to the devices in the home) into a single device. I’ve always advised clients not to combine the modem and the WiFi router, because if you want to upgrade only one of those two functions you have to replace the whole device. With separate devices, an ISP can upgrade just one function. That’s going to become an issue soon for many ISPs when customers start asking them to provide WiFi 6 routers.

Some ISPs go beyond a simple modem and router. For example, Comcast’s broadband service to single-family homes typically provides a WiFi router for the home and a second WiFi router that broadcasts to nearby customers outside the home. These dual routers allow Comcast to claim to have millions of public WiFi hotspots. Many of my clients are now installing networked router systems for customers where multiple routers share the same network. These networked systems can provide strong WiFi throughout a home, with the advantage that the same passwords are usable at each router.

FCC Proposes New WiFi Spectrum

On December 17 the FCC issued a Notice of Proposed Rulemaking for the 5.9 GHz spectrum band that would create new public spectrum that can be used for WiFi or other purposes. The 5.9 GHz spectrum band was previously assigned in 2013 to support DSRC (Dedicated Short Range Communications), a technology to communicate between cars, and between cars and infrastructure. The spectrum band covered by the order is 75 megahertz wide. The FCC suggests that the lower 45 megahertz be made available to anybody as new public spectrum. They’ve assigned the highest 20 megahertz for a newer smart car technology called C-V2X. The FCC tentatively kept the remaining bandwidth for the older DSRC technology, dependent upon the users of that technology convincing the agency that it’s viable – otherwise, it also converts to C-V2X usage.
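
To make the proposed split concrete, here is a quick sketch of how the 75 megahertz divides up under the NPRM. The band edges shown are my assumption of the usual 5.9 GHz band boundaries; only the widths come from the FCC proposal.

```python
# Rough sketch of the proposed 5.9 GHz band plan described in the NPRM.
# Band edges (assumed to be 5.850 - 5.925 GHz) are my assumption;
# only the widths come from the FCC proposal.
TOTAL_MHZ = 75

proposed_split_mhz = {
    "new public spectrum (WiFi and other unlicensed uses)": 45,
    "C-V2X": 20,
    "DSRC (retained only if shown to be viable)": TOTAL_MHZ - 45 - 20,  # 10 MHz
}

for use, width in proposed_split_mhz.items():
    print(f"{width:>3} MHz - {use}")
```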

DSRC technology has been around for twenty years. The goal of the technology is to allow cars to communicate with each other and to communicate with infrastructure like toll booths or traffic measuring sensors. One of the biggest benefits touted for DSRC is increased safety so that cars will know what’s going on around them, such as when a car ahead is braking suddenly.

For the new technology, the V2X stands for vehicle-to-everything. Earlier this year Ford broke from the rest of the industry and dropped research in DSRC communications in favor of C-V2X. Ford says they will introduce C-V2X into their whole fleet in 2022. Ford touts the technology as enabling cars to ‘see around corners’ due to the ability to gather data from other cars in the neighborhood. They believe the new technology will improve safety, reduce accidents, allow things like safely forming convoys of vehicles on open highways, and act as an important step towards autonomous cars. C-V2X uses the 3GPP standard and provides an easy interface between 5G and vehicles.

This decision was not without controversy. The Department of Transportation strenuously opposed the reduction of spectrum assigned for vehicle purposes. The DOT painted the picture of the spectrum providing a huge benefit for traffic safety in the future, while the FCC argued that the auto industry has done a poor job of developing applications to use the spectrum.

This is an NPRM, meaning that there will be a cycle of public comments before the FCC votes on the order. I think we can expect major filings by the transportation industry describing reasons why taking away most of this spectrum is a bad idea. On the day of the FCC vote, Elaine Chao, the Secretary of Transportation, said that the FCC is valuing Netflix over public safety – so this could yet become an ugly fight.

Perhaps the biggest news from the announcement is the big slice of the spectrum that will be repositioned for public use – a decision praised by the WiFi Alliance. The FCC proposes to make this public spectrum that is open to everybody, not just specifically for WiFi. The order anticipates that 5G carriers might use the spectrum for cellular offload. If the cellular carriers heavily use the spectrum in urban areas, then the DOT might be right and this might be a giveaway of 5G spectrum without an auction.

There is no guarantee that the cellular carriers will heavily use the spectrum. Recall a few years ago there was the opportunity for the cellular carriers to dip into the existing WiFi spectrum using LTE-U to offload busy cellular networks. The carriers used LTE-U much less than anticipated by the WiFi industry, which had warned that cellular offload could overwhelm WiFi. It turns out the cellular carriers don’t like spectrum where they have to deal with unpredictable interference.

Even if the cellular carriers use the spectrum for cellular offload in urban areas, the new public block ought to be mostly empty in rural America. That will create an additional spectrum band to help boost point-to-multipoint radios.

Regardless of how the new spectrum might be used outdoors, it ought to provide a boost to indoor WiFi. The spectrum sits just a little higher than the current 5 GHz WiFi bands and should significantly boost home WiFi speeds and capacity. The new spectrum will also provide an opportunity to reduce interference with existing WiFi networks by providing more channels across which home usage can be spread.

This particular docket shows why spectrum decisions at the FCC are so difficult. Every potential use for this mid-range spectrum creates significant public good. How do you weigh safer driving against better 5G or against better rural broadband?

Immersive Virtual Reality

In case you haven’t noticed, virtual reality has moved from headsets to the mall. At least two companies now offer an immersive virtual reality experience that goes far beyond what can be experienced with only a VR headset at home.

The newest company is Dreamscape Immersive, which has launched virtual reality studios in Los Angeles and Dallas, with more outlets planned. The virtual reality experience is enhanced by the use of a headset, hand and foot trackers, and a backpack holding the computers. The action occurs within a 16X16 room with a vibrating haptic floor that responds to the actions of the participant. This all equates to an experience where a user can reach out and touch objects or can walk around all sides of a virtual object in the environment.

The company has launched with three separate adventures, each lasting roughly 15 minutes. In Alien Zoo the user visits a zoo populated by exotic and endangered animals from around the galaxy. In The Blu: Deep Rescue users try to help reunite a lost whale with its family. The Curse of the Lost Pearl feels like an Indiana Jones adventure where the user tries to find a lost pearl.

More established is The Void, which has launched virtual reality adventure sites in sixteen cities, with more planned. The company is creating virtual reality settings based upon familiar content. The company’s first VR experience was based on Ghostbusters. The current theme is Star Wars: Secrets of the Empire.

The Void lets users wander through a virtual reality world. The company constructs elaborate sets where the walls and locations of objects in the real-life set correspond to what is being seen in the virtual reality world. This provides users with real tactile feedback that enhances the virtual reality experience.

You might be wondering what these two companies and their virtual reality worlds have to do with broadband. I think they provide a peek at what virtual reality in the home might become in a decade. Anybody who’s followed the growth of video games can remember how the games started in arcades before they were shrunk to a format that would work in homes. I think the virtual reality experiences of these two companies are a precursor to the virtual reality we’ll be having at home in the not-too-distant future.

There is already a robust virtual reality gaming industry, but it relies entirely on providing a virtual reality experience through the use of goggles. There are now many brands of headsets on the market, ranging from the simple cardboard headset from Google to more expensive headsets from companies like Oculus, Nintendo, Sony, HTC, and Lenovo. If you want to spend an interesting half hour, you can see the current most popular virtual reality games at this review from PCGamer. To a large degree, virtual reality gaming has been modeled on existing traditional video games, although some interesting VR games are now offering content that only makes sense in 3D.

The whole video game market is in the process of moving content online, with the core processing of the gaming experience done in data centers. While most games are still available in more traditional formats, gamers are increasingly connecting to a gaming cloud and need a broadband connection akin in size to a 4K video stream. Historically, many games have been downloaded, causing headaches for gamers with data caps. Playing the games in the cloud can still chew up a lot of bandwidth for active gamers but avoids the giant gigabyte downloads.
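
As a rough illustration of that tradeoff, here is a back-of-the-envelope comparison using assumed numbers: a cloud gaming stream of about 20 Mbps (roughly a 4K video stream) and a 50 GB game download. Neither figure comes from a specific gaming platform.

```python
# Back-of-the-envelope comparison of cloud gaming versus downloading a game.
# Both inputs are illustrative assumptions, not published figures.
stream_mbps = 20     # assumed cloud-gaming bitrate, roughly a 4K video stream
hours_played = 2     # an evening of play
download_gb = 50     # assumed size of a modern game download

gb_streamed = stream_mbps * 3600 * hours_played / 8 / 1000  # megabits -> gigabytes
print(f"Streaming {hours_played} hours uses about {gb_streamed:.0f} GB")  # ~18 GB
print(f"A one-time download uses about {download_gb} GB")
```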

If history is a teacher, the technologies used by these two companies will eventually migrate to homes. We saw this migration occur with first-generation video games – there were video arcades in nearly every town, but within a decade those arcades got displaced by the gaming boxes in the home that delivered the same content.

When the kind of games offered by The Void and Dreamscape Immersive reach the home they will ramp up the need for home broadband. It’s not hard to imagine immersive virtual reality needing 100 Mbps speeds or greater for one data stream. These games are the first step towards eventually having something resembling a home holodeck – each new generation of gaming is growing in sophistication and the need for more bandwidth.

Improving Rural Wireless Broadband

Microsoft has been implementing rural wireless broadband using white space spectrum – the slices of spectrum that sit between traditional TV channels. The company announced a partnership with ARK Multicasting to introduce a technology that will boost the efficiency of fixed wireless networks.

ARK Multicasting does just what its name implies. Today about 80% of home broadband usage is for video, and ISPs unicast video, meaning that they send a separate stream of a given video to each customer who wants to watch it. If ten customers in a wireless node are watching the same new Netflix show, the ISP sends out ten copies of the program. Today, even in a small wireless node of a few hundred customers, an ISP might be transmitting dozens of simultaneous copies of the most popular content in an evening. The ARK Multicasting technology will instead send out just one copy of the most popular content on the various OTT services like Netflix, Amazon Prime, and Apple TV. This one copy will be cached in an end-user storage device, and if a customer elects to watch the new content, they view it from the local cache.
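
A minimal sketch of why that matters for a wireless node, assuming an HD stream of roughly 5 Mbps (my assumption, not a figure from ARK or Microsoft): with unicast, the node's video load grows with every viewer, while a single cached or multicast copy does not.

```python
# Load on a wireless node for delivering one popular title,
# unicast versus a single multicast/cached copy.
# The 5 Mbps HD bitrate is an assumption.
HD_STREAM_MBPS = 5

def node_video_load_mbps(viewers: int, multicast: bool) -> int:
    """Mbps the node spends delivering one title to `viewers` customers."""
    streams = 1 if multicast else viewers
    return streams * HD_STREAM_MBPS

for viewers in (10, 50, 100):
    unicast = node_video_load_mbps(viewers, multicast=False)
    cached = node_video_load_mbps(viewers, multicast=True)
    print(f"{viewers} viewers: {unicast} Mbps unicast vs {cached} Mbps with one cached copy")
```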

The net impact of multicasting should be a huge decrease in demand for video content during peak network hours. It would be interesting to know what percentage of video viewing in a given week comes from watching newly released content. I’m sure all of the OTT providers know that number, but I’ve never seen anybody talk about it. If anybody knows that statistic, please post it in the comments to this blog. Anecdotal evidence suggests the percentage is significant because people widely discuss new content on social media soon after it’s released.

The first trial of the technology is being done in conjunction with a Microsoft partner wireless network in Crockett, Texas. ARK Multicasting says that it is capable of transmitting 7-10 terabytes of content per month, which equates to 2,300 – 3,300 hours of HD video. We’ll have to wait to see the details of the deployment, but I assume that Microsoft will provide the hefty CPE capable of multi-terabyte storage – there are no current consumer set-top boxes with that much capacity. I also assume that cellphones and tablets will grab content using WiFi from the in-home storage device since there are no tablets or cellphones with terabyte storage capacity.
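
The claimed hours line up with a simple sanity check if you assume roughly 3 gigabytes per hour of HD video, a common rule of thumb rather than a number from ARK:

```python
# Sanity check on 7-10 TB/month equating to 2,300 - 3,300 hours of HD video.
# The 3 GB per hour of HD video is an assumed rule of thumb, not an ARK figure.
GB_PER_HD_HOUR = 3

for terabytes in (7, 10):
    hours = terabytes * 1000 / GB_PER_HD_HOUR
    print(f"{terabytes} TB is roughly {hours:,.0f} hours of HD video")
# Prints about 2,333 and 3,333 hours, consistent with the claimed range.
```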

To be effective, ARK must delete older programming to make room for new content, meaning that the available local cache will always contain the latest and most popular content on the various OTT platforms.

There is an interesting side benefit of the technology. Viewers should be able to watch cached content even if they lose the connection to the ISP. Even after a big network outage due to a storm, ISP customers should still be able to watch many hours of popular content.

This is a smart idea. The weakest part of the network for many fixed wireless systems is the backhaul connection. When a backhaul connection gets stressed during the busiest hours of network usage all customers on a wireless node suffer from dropped packets, pixelization, and overall degraded service. Smart caching will remove huge amounts of repetitive video signals from the backhaul routes.

Layering this caching system onto any wireless system should free up peak evening network resources for other purposes. Fixed wireless systems are like most other broadband technologies where the bandwidth is shared between users of a given node. Anything that removes a lot of video downloading at peak times will benefit all users of a node.

The big OTT providers already do edge-caching of content. Providers like Netflix, Google, and Amazon park servers at or near ISPs to send local copies of the latest content. That caching saves a lot of bandwidth on the internet transport network. The ARK Multicasting technology will carry caching down to the customer level and bring the benefits of caching to the last-mile network.

A lot of questions come to mind about the nuances of the technology. Hopefully the downloads are done in the slow hours of the network so as not to add to network congestion. Will all popular content be sent to all customers – or just content from the services they subscribe to? The technology isn’t going to work for an ISP with data caps because the caching means customers might be downloading multiple terabytes of data that may never be viewed.

I assume that if this technology works well, ISPs of all kinds will consider it. One interesting aspect of the concept is that it gets ISPs back into the business of supplying boxes to customers – something that many ISPs avoid as much as possible. However, if it works as described, this caching could create a huge boost for last-mile networks by relieving a lot of repetitive traffic, particularly at peak evening hours. I remember local caching being tried a decade or more ago, but it never worked as promised. It will be interesting to see if Microsoft and ARK can pull this off.

A New Technology for MDU Broadband

A Canadian company recently announced a new device that promises the ability to deliver gigabit speeds inside of MDUs using existing copper or coaxial wiring. The company is Positron Access Solutions, and I talked to their CTO and president, Pierre Trudeau, at the recent Broadband Communities event in Washington DC. Attached are an article and a PowerPoint describing the new technology.

The technology is built upon a framework of the G.hn standards. You might remember this as the standard supporting powerline carrier, which was used before WiFi to distribute broadband around the home over the electrical wiring. G.hn over powerline was a sufficient technology when broadband speeds were slow but didn’t scale up to support faster broadband speeds. In thinking back, I recall that the biggest limitation was that dozens of different types of electrical wires have been used in homes over the last century, and it was hard to make a technology work as promised over the various sizes and types of in-home wiring.

Positron has been around for many years and manufactures IP PBX systems and DSL extenders. They are referring to the new technology as GAM, which I take to mean G.hn Access Network.

The company says that the technology will deliver a gigabit signal about 500 feet over telephone copper wires and over 4,000 feet on coaxial cable. Large MDUs delivering the technology using telephone copper might require spacing a few devices throughout parts of the network.

The technology operates on unused frequency bands on the copper cables. For example, on telephone copper, the technology can coexist on a telephone wire that’s already carrying telephone company voice. On coaxial cable, the Positron device can coexist with satellite TV from DirecTV or Dish Networks but can’t coexist with a signal from a traditional cable company.

Positron says they are a natural successor to G.fast, which has never gotten a lot of traction in the US. Positron says they can deliver more bandwidth with less noise than G.fast. The Positron GAM spits out Ethernet at the customer apartment unit and can be used with any existing CPE like WiFi routers, computers, TVs, etc.

This is a new technology and the company currently has only a few test units at clients in the field. Like all new technology, a company should consider this as a beta technology where the vendor will be working out field issues. But this technology has a lot of promise if perfected. There are a lot of older MDUs where the cost of rewiring is prohibitive or where the building owners don’t want fiber strung through hallways. Getting to apartment units through existing copper wiring should be less disruptive, less expensive and faster to market.

I always caution all of my clients about using first-generation technology. It’s bound to suffer from issues that aren’t discovered until deployed in real-world situations. First-generation equipment is always a risk since many vendors have abandoned product lines that have too many field problems. The supply chain is often poorly defined, although in the case of Positron the company has been providing technical support for many years. My main concern with beta technology is that it’s never comfortable using end-user customers as guinea pigs.

However, an MDU might be the perfect environment to try new technology. Many MDUs have been unable to attract better broadband due to high rewiring costs and might be willing to work with an ISP to test new technology. If this technology operates as touted it could provide a cost-effective way to get broadband into MDUs, particularly older ones where rewiring is a cost barrier.

The Future of Coaxial Networks

My blog devotes a lot of time to looking at fiber deployment, but since the majority of people in the US get broadband from cable companies using hybrid fiber/coaxial (HFC) technology, today’s blog looks at the next generation of changes planned for HFC.

DOCSIS 4.0. The current generation of HFC technology is DOCSIS 3.1. This technology uses 1.2 GHz of spectrum over coaxial cable. DOCSIS 3.1 has several competitive drawbacks compared to fiber. First, while the technology can deliver gigabit download speeds to customers, the dirty secret of the industry is that gigabit speeds can only be given to a limited number of customers. With current node sizes, cable companies can’t support very many large data users without sacrificing the performance of everybody in a node. This is why you don’t see cable companies pricing gigabit broadband at competitive prices or pushing it very hard.

The other big drawback is that upload speeds on DOCSIS 3.1 are set by specification to be no more than one-eighth of the total bandwidth on the system. Most cable companies don’t even allocate that much to upload speeds.

The primary upgrade with DOCSIS 4.0 will be to increase system bandwidth to 3 GHz. That supplies enough additional bandwidth to provide symmetrical gigabit service or else offer products that are faster than 1 Gbps download. It would also allow a cable company to support a lot more gigabit customers.

The big drawback to the upgrade is that many older coaxial cables won’t be able to handle that much bandwidth and will have to be replaced. Further, upgrading to 3 GHz is going to mean replacing or upgrading power taps, repeaters, and other field hardware in the coaxial network. CableLabs is talking about finalizing the DOCSIS 4.0 specification by the end of 2020. None of the big cable companies have said if and when they might embrace this upgrade. It seems likely that many of the bigger cable companies are in no hurry to make this upgrade.

Low Latency DOCSIS (LLD). Another drawback of HFC networks is that they don’t have the super-low latency needed to support applications like intense gaming or high-quality video chat. The solution is a new encoding scheme being called low latency DOCSIS (LLD).

The LLD solution doesn’t change the overall latency of the cable network but instead prioritizes low-latency applications. The result is to increase the latency for other applications like web-browsing and video streaming.

This can be done because most of the latency on an HFC network comes from the encoding schemes used to layer broadband on top of cable TV signals. The encoding schemes on coaxial cable networks are far more complex than fiber encoding. There are characteristics of copper wires that cause natural interference within a transmission path. A coaxial encoding scheme must account for attenuation (loss of signal over distance), noise (the interference that appears from external sources since copper acts as a natural antenna), and jitter (the fact that interference is not linear and comes and goes in bursts). Most of the latency on a coaxial network comes from the encoding schemes that deal with these conflicting characteristics. The LLD solution bypasses traditional encoding for the handful of applications that need low latency.
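
Conceptually, the prioritization can be pictured as two queues, one for latency-sensitive packets and one for everything else, with the low-latency queue always served first. The toy sketch below shows the idea only; it is not the actual LLD scheduler defined in the CableLabs specifications.

```python
from collections import deque
from typing import Optional

# Toy two-queue prioritization: latency-sensitive packets always go out ahead of
# ordinary traffic. A conceptual sketch, not the actual Low Latency DOCSIS scheduler.
low_latency_q = deque()  # e.g., gaming, video chat
classic_q = deque()      # e.g., web browsing, video streaming

def enqueue(packet: str, low_latency: bool) -> None:
    (low_latency_q if low_latency else classic_q).append(packet)

def next_packet() -> Optional[str]:
    """Transmit low-latency traffic first; classic traffic waits its turn."""
    if low_latency_q:
        return low_latency_q.popleft()
    if classic_q:
        return classic_q.popleft()
    return None

enqueue("web page request", low_latency=False)
enqueue("game position update", low_latency=True)
print(next_packet())  # the game update jumps ahead of the web page request
```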

Virtual CMTS. One of the more recent improvements in coaxial technology was distributed access architecture (DAA). This technology allows for disaggregating the CMTS (the router used to provide customer broadband) from core routing functions, meaning that the CMTS no longer has to sit at the core of the network. The easiest analogy to understand DAA is to consider modern DSLAM routers. Telephone companies can install a DSLAM at the core of the network, but they can instead put the DSLAM at the entrance to a subdivision to get it closer to customers. DAA allowed cable companies to make this same change.

With a virtual CMTS, a cable company takes DAA a step further. In a virtual CMTS environment, the cable company might perform some of the CMTS functions in remote data centers in the cloud. There will still be a piece of electronics where the CMTS used to sit, but many of the computing functions can be done remotely.

A cloud-based CMTS offers some advantages to the cable operator:

  • Allows for customizing portions of a network. The data functions provided to a business district can be different from what is supplied to a nearby residential neighborhood. Customization can even be carried down to the customer level for large business customers.
  • Allows for the use of cheap off-the-shelf hardware, similar to what’s been done in the data centers used by big data companies like Google and Facebook. CMTS hardware has always been expensive because it’s been made by only a few vendors.
  • Improves operations by saving on local resources like local power, floor/rack space, and cooling by moving heavy computing functions to data centers.

Summary. There is a lot of discussion within the cable industry asking how far cable companies want to push HFC technology. The CEOs of the major cable companies have all said that their eventual future is fiber, and while each of the above changes brings HFC closer to fiber performance, none of them fully matches it. Some Wall Street analysts have predicted that cable companies won’t embrace bandwidth upgrades for a while since they already have the marketing advantage of being able to claim gigabit speeds. The question is whether the cable companies are willing to make the expensive investment to functionally come closer to fiber performance or whether they are happy to just claim to be equivalent to fiber.

Do Cable Companies Have a Wireless Advantage?

The big wireless companies have been wrangling for years with the issues associated with placing small cells on poles. Even with new FCC rules in their favor, they are still getting a lot of resistance from communities. Maybe the future of urban and suburban wireless lies with the big cable companies. Cable companies have a few major cost advantages over the wireless companies, including the ability to bypass the pole issue.

The first advantage is the ability to deploy mid-span cellular small cells. These are cylindrical devices that can be placed along the coaxial cable between poles. I could not find a picture of these devices, and the picture accompanying this article is of a strand-mounted fiber splice box – but it’s a good analogy since a strand-mounted small cell device is approximately the same size and shape.

Strand-mounted small cells provide a cable company with a huge advantage. First, they don’t need to go through the hassle of getting access to poles and they avoid paying the annual fees to rent space on poles. They also avoid the issue of fiber backhaul since each unit can get broadband using a DOCSIS 3.1 modem connection. The cellular companies don’t talk about backhaul a lot when they discuss small cells, but since they don’t own fiber everywhere, they will be paying a lot of money to other parties to transport broadband to the many small cells they are deploying.

The cable companies also benefit because they could quickly deploy small cells anywhere they have coaxial cable on poles. In the future, when wireless networks might need to be very dense, the cable companies could deploy a small cell between every pair of poles. If the revenue benefits of providing small cells are great enough, this could even prompt the cable companies to expand the coaxial network to nearby neighborhoods that might not otherwise meet their density tests – for most cable companies, that means only building where there are at least 15 to 20 potential customers per linear mile of cable.

The cable companies have another advantage over the cellular carriers in that they have already deployed a vast WiFi network comprised of customer WiFi modems. Comcast claims to have 19 million WiFi hotspots. Charter has a much smaller count of 500,000 hotspots but could expand that number quickly if needed. Altice is reportedly investing in WiFi hotspots as well. The big advantage of WiFi hotspots is that their broadband capacity can be tapped to act as landline backhaul for cellular data and even voice calls.

The biggest cable companies are already benefitting from WiFi backhaul today. Comcast just reported to investors that they added 204,000 wireless customers in the third quarter of 2019 and now have almost 1.8 million wireless customers. Charter is newer to the wireless business and added 276,000 wireless customers in the third quarter and now has almost 800,000 wireless customers.

Both companies are buying wholesale cellular capacity from Verizon under an MVNO contract. Any cellular minute or cellular data they can backhaul with WiFi doesn’t have to be purchased from Verizon. If the companies build small cells, they would further free themselves from the MVNO arrangement – another cost savings.
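
To see why the offload matters financially, here is a purely hypothetical illustration; neither the wholesale rate nor the usage numbers come from Comcast, Charter, or Verizon.

```python
# Hypothetical illustration of what WiFi offload saves under an MVNO agreement.
# All three inputs are made-up assumptions for illustration only.
WHOLESALE_COST_PER_GB = 2.00   # assumed MVNO rate paid to the host carrier, $/GB
MONTHLY_GB_PER_CUSTOMER = 10   # assumed average cellular data use per customer
WIFI_OFFLOAD_SHARE = 0.6       # assumed share of that data carried over WiFi instead

savings = WHOLESALE_COST_PER_GB * MONTHLY_GB_PER_CUSTOMER * WIFI_OFFLOAD_SHARE
print(f"Hypothetical savings: ${savings:.2f} per customer per month")  # $12.00
```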

A final advantage for the cable companies is that they are deploying small cell networks where they already have a workforce to maintain the network. Both AT&T and Verizon have laid off huge numbers of workers over the last few years and no longer have fleets of technicians in all of the markets where they need to deploy cellular networks. These companies are faced with adding technicians where their network is expanding from a few big-tower cell sites to vast networks of small cells.

The cable companies don’t have nearly as much spectrum as the wireless companies, but they might not need it. The cable companies will likely buy spectrum in the upcoming CBRS auction and the other mid-range spectrum auctions over the next few years. They can also use the 80 MHz of free CBRS spectrum that’s available everywhere.

These advantages equate to a big cost advantage for the cable companies. They save on speed to market and avoid paying for pole-mounted small cells. Their networks can provide the needed backhaul practically for free. They can offload a lot of cellular data through the customer WiFi hotspots. And the cable companies already have the staff to maintain the small cell sites. At least in places that have aerial coaxial networks, the cable companies should have higher margins than the cellular companies and should be formidable competitors.

Keeping an Eye on the Future

The IEEE, the Institute of Electrical and Electronics Engineers, has been issuing a document annually that lays out a roadmap to make sure that the computer chips that drive all of our technologies are ready for the future. The latest such document is the 2019 Heterogeneous Integration Roadmap (HIR). The purpose of the document is to encourage the needed research and planning so that the electronics industry creates interoperable chips that anticipate the coming computer needs while also functioning across multiple industries.

This is particularly relevant today because major technologies are heading in different directions. Fields like 5G, quantum computing, AI, IoT, gene splicing, and self-driving vehicles are all pursuing different technology solutions that could easily result in specialized one-function chips. That’s not necessarily bad, but the IEEE believes that all technologies will benefit if chip research and manufacturing processes are done in such a way as to accommodate a wide range of industries and solutions.

IEEE uses the label of ‘heterogeneous integration’ to describe the process of creating a long-term vision for the electronics industry. They identify this HIR effort as the key technology going forward that is needed to support the other technologies. They envision a process where standard and separately manufactured chip components can be integrated to produce the chips needed to serve the various fields of technology.

The IEEE has created 19 separate technical working groups looking at specific topics related to HIR. This list shows both the depth and breadth of the IEEE effort. Working groups in 2019 include:

Difficult Challenges

  • Single chip and multichip packaging (including substrates)
  • Integrated photonics (including plasmonics)
  • Integrated power devices
  • MEMS (miniaturization)
  • RF and analog mixed signals

Cross Cutting Topics

  • Emerging research materials
  • Emerging research devices
  • Interconnect
  • Test
  • Supply chain

Integrated Processes

  • SiP
  • 3D + 2.5D
  • WLP (wafer level packaging)

Packaging for Specialized Applications

  • Mobile
  • IoT and wearable
  • Medical and health
  • Automotive
  • High performance computing
  • Aerospace and defense

Just a few years ago many of the specific technologies were not part of the HIR process. The pace of technological breakthroughs is so intense today that the whole process of introducing new chip technology could easily diverge. The IEEE believes that taking a holistic approach to the future of computing will eventually help all fields as the best industry practices and designs are applied to all new chips.

The effort behind the HIR process is substantial since various large corporations and research universities provide the talent needed to dig deeply into each area of research. I find it comforting that the IEEE is working behind the scenes to make sure that the chips needed to support new technologies can be manufactured efficiently and affordably. Without this effort, the cost of electronics for broadband networks and other technologies might skyrocket over time.